(While we’re busy at work on Season 2 of the podcast, Shell Game’s Season 1 keeps making the rounds, in ways that continue to surprise us. Thanks as always to everyone who’s listened, signed up, and/or dropped us a kind note. This week the show was announced as a nominee for a Webby Award in the Podcast - Technology category. It’s an honor, and we appreciate it. I have mixed feelings about the more democratic aspect of the ol’ Webbys, but since folks often ask what they can do to support the show, well: Go vote for us!)
One of the original reasons I got hooked on playing around with AI voice agents, over a year ago, was the joy it provided as a tool for messing with spammers and scammers. I’ve written since about how phone companies have started launching similar efforts, deploying AI in the same manner I did: setting up voice agents to take unsolicited calls, chat away about nothing, and keep unwanted callers occupied. (In my case, of course, the agent is an AI version of me, but more friendly and open to being scammed.) Of late, there’s been a rash of new stories about companies using AI to try to get a handle on the massive global scamming problem. In a recent update to its Pixel phone, Google announced that it incorporated an AI feature that “detects conversation patterns in calls commonly used by scammers in real time and will notify you if it senses anything suspicious.” (I haven’t tested this, not having a Pixel, but if anyone has, please drop me a note. I’m dying to know what it’s like when your phone alerts you mid-conversation that the person on the other end sounds like a scammer.)
In Australia, a startup called Apate.AI has taken the Shell Game Approach™ (well, not really, but perhaps we should have tried to monetize it). They’ve created “AI personas,” as the company’s founder Dali Kaafar told Marketplace, “specifically designed and trained to engage the scammers, to keep them talking, and… pretty much waste their time.” As I’ve noted in the past, I’m skeptical of the ultimate value in wasting scammers’ time as a means of reducing scams. There are, frankly, too many scammers with too much time. But Kaafar suggests a secondary benefit to deploying AI against them, as “a huge opportunity to extract critical information from the mouth of the scammers themselves,” providing “this accurate view of what the scam landscape looks like in real time.”
That rationale I agree with. After listening to untold hours of my AI agent conversing with telemarketers and swindlers of all persuasions, I’ve become highly attuned to the rhythms of the cold call racket. There’s the little “bloop” sound that often accompanies a call center worker appearing on the line. There’s the exact moment in their data collection—about the target’s health insurance needs, or burial expense needs, or roofing needs, or cryptocurrency interest—where the initial caller hands off to another to close the deal. There’s the specific way in which AI agents are deployed to start conversations, before giving way to “live agents.”
But after hearing them routinely for a year, those rhythms started to become a bit numbing—particularly as the volume of calls to the Shell Game scam line continued to grow. Although the conversations AI Evan had with scammers could occasionally still bring me a spark of joy, over time they started bringing me down. I found myself checking the line’s recordings less and less, and debated whether I should shut it down for good. But first, I figured I’d skim the call recordings from the past few months. When I did, I discovered one final surprise.
AI Evan had finally made a real friend.