The Shell Game tentacles keep extending, with the show popping up in some unexpected places. Like this interview with the novelist Emily St. John Mandel, author of the remarkably prescient Station Eleven and the recent time-hopping Sea of Tranquility. In the context of discussing what humanity might lose as we begin unwittingly engaging with AI in the world, Mandel describes having recently heard what she calls “a slightly terrifying” episode of a podcast in which the host unleashed a cloned AI voice agent. (She can’t recall the show’s name, but the Globe and Mail figured out it was Shell Game and linked to us alongside The Handmaid’s Tale. In that company, all is forgiven.)
“That occurred to me as something that I guess now we need to start worrying about,” she says, of suddenly being unsure about whether a person you’re conversing with is real or artificial. “If we’re in a Zoom meeting, are we actually speaking to people, or are we speaking to their AI avatars at this point? And I don’t know how to fix that, or what the test is for it. Do you call your friend at the beginning of the day and say our passcode for today is X, and if you can’t tell me X on Zoom, I’ll know it’s not you? I don’t know how we screen for this, or if we’ll just kind of get better at it. If it’ll be a skill that we’ll learn.”
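Mandel's test is, mechanically speaking, a shared-secret challenge: a human version of the one-time passcodes our phones already generate. If two friends hold a secret that never travels over the channel they're worried about, each can derive the same code for the day and challenge the other with it. A toy sketch in Python, with a made-up secret and word list, just to show the mechanics:

```python
import datetime
import hashlib
import hmac

# Hypothetical word list; any list both friends agree on would work.
WORDS = ["otter", "violin", "basalt", "lantern", "juniper",
         "quartz", "meadow", "cobalt", "saffron", "harbor"]

def daily_passcode(shared_secret: bytes, day: datetime.date) -> str:
    """Derive a speakable passcode from a shared secret and today's date."""
    mac = hmac.new(shared_secret, day.isoformat().encode(), hashlib.sha256).digest()
    # Map the first three bytes of the MAC to three words a person can say aloud.
    return "-".join(WORDS[b % len(WORDS)] for b in mac[:3])

# Both friends run this each morning; an AI avatar that never learned the
# secret can't produce the same three words on the Zoom call.
print(daily_passcode(b"that dinner in Toronto, 2019", datetime.date.today()))
```

The catch is the one Mandel herself points to: it only works if you set up the secret before you needed it.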
I recommend the whole interview. It’s the kind of discussion we hoped to enter into when we made the show: one in which people are trying to grapple with a technology that feels simultaneously elusive (particularly in its economic benefits) and inevitable (in the way it can creep in undetected all around us). I happen to be on the front lines of the particular fear that Mandel describes, having voluntarily subjected myself to an endless stream of “how do I know this is really you?”-type questions from friends and strangers alike.
Mandel’s off-the-cuff solution, a code you share with friends to prove you’re real, put me in mind of a research paper I stumbled upon a couple months ago, titled “Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online.” It’s by a group of Microsoft, OpenAI, and academic researchers looking at this precise question. The problem, as they pose it, is a larger version of Mandel’s: How do we create new ways of identifying ourselves in a world of “AI-powered deception,” and especially of what they call “a particular form of AI-powered deception that is harmful in isolated incidents but especially harmful at scale—impersonation.”
The paper notes something I covered a couple weeks ago, when I used my AI voice clone to get into my own bank account: namely, that some of the very modern biometric-style controls we’ve created to identify people are now being undone by AI. “Authentication methods once considered effective in reducing fraud—such as voice-based authentication—are now increasingly vulnerable to AI-driven attacks,” they write. “Advanced voice synthesis technologies can replicate an individual’s tone and speech patterns using minimal data, turning voice authentication into a potential avenue for account takeovers.”
Their biggest overall concern, they write, is that AI systems are “becoming increasingly agentic: capable of dynamically and independently carrying out actions toward goals over extended periods of time, without humans being in the loop or pre-specifying their actions or subgoals.” Think of AI Evan, instructed only to answer every call and see where it goes. Or the AI scam agents, instructed only to push the person on the other end of the line closer and closer to giving up their money. AI voice is getting more agentic by the month.
The possible solutions to all this AI-based deception and impersonation are in their early days, and largely theoretical. The researchers’ first goal is really just to get people to understand that we have a problem—a problem, I have to point out, created and set loose by some of their employers—and to study that problem in depth. The next is to outline some general principles of a “personhood credential”: that each credential is tied to only one person, for example; that it provides the “minimum necessary identifying information”; and that it can’t be easily used for surveillance and tracking. All easier said than done.
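Nothing in the paper prescribes a data format, but those three principles are concrete enough to sketch. Here’s one hypothetical shape for a credential, in Python, where every name is mine, not theirs: a single secret issued once per person, and per-service pseudonyms that can’t be linked to each other or traced back to the holder.

```python
import hashlib
import secrets
from dataclasses import dataclass

@dataclass
class PersonhoodCredential:
    """Illustrative only: issued once per person after some offline
    uniqueness check, storing nothing beyond what that check required
    (the "minimum necessary identifying information")."""
    secret: bytes  # held by the person; never shown to any service

    def pseudonym_for(self, service_id: str) -> str:
        # Anti-tracking property: each service sees a stable identifier,
        # but identifiers for different services can't be linked to each
        # other, or back to the person's legal identity.
        return hashlib.sha256(self.secret + service_id.encode()).hexdigest()

cred = PersonhoodCredential(secret=secrets.token_bytes(32))
print(cred.pseudonym_for("zoom.example"))  # what one service would see
print(cred.pseudonym_for("bank.example"))  # unlinkable to the line above
```

A real system would lean on blind signatures or zero-knowledge proofs rather than a bare hash, but the shape of the guarantees is the same.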
Interestingly, the researchers assert that these kinds of credentials won’t just be necessary to verify the identity of a particular human in an interaction. They could also allow you to verify the “delegated AI agent” of a human. As the paper puts it, “Personhood credentials could offer a way to verify that AI agents are acting as delegates of real people, signaling credible supervision without revealing the principal’s legal identity. This feature could be useful in a range of settings where users wish to rely upon AI assistants.”
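Strip away the jargon and this is a signature chain, the same trick that makes web certificates work. Assuming the human’s key is bound to a personhood credential (the hard part, which this sketch waves away), verifying a delegate could look roughly like this in Python, using the cryptography library; the message formats are invented for illustration:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

human_key = Ed25519PrivateKey.generate()  # stands in for a credential-bound key
agent_key = Ed25519PrivateKey.generate()  # held by the delegated AI agent

# The human signs a short-lived statement naming the agent's public key.
agent_pub_raw = agent_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
delegation = b"delegate:" + agent_pub_raw + b"|expires:2025-12-31"
delegation_sig = human_key.sign(delegation)

# On a call, a verifier issues a fresh challenge and the agent signs it.
challenge = b"meeting-nonce-1234"
proof = agent_key.sign(challenge)

# The verifier checks both links: a real person vouched for this agent key,
# and the party on the call actually holds that key.
try:
    human_key.public_key().verify(delegation_sig, delegation)
    agent_key.public_key().verify(proof, challenge)
    print("verified delegate of a credentialed person")
except InvalidSignature:
    print("impostor")
```

The verifier learns that some credentialed person stands behind the agent without learning which person, which is the “signaling credible supervision without revealing the principal’s legal identity” the paper describes.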
It’s all a bit head-spinning, naturally. But if you take as a given that these AI agents are going to start showing up on people’s behalf—and yes, I know many people hate this idea, hope that it never comes to pass, and/or do not want to even hear about it—you would want to know whether a given AI actually, authentically represents the person it’s supposed to. The only thing worse than having a Zoom call with your boss’s avatar, this line of thinking goes, is having a Zoom call with what you think is your boss’s avatar, but is actually the rival CEO’s avatar. Or just some kid from Reddit.
When the paper came out, the researchers appeared on a Microsoft podcast called Abstracts, a 15-minute show in which they summarized their research with the help of an inquisitive host. Lo and behold, a couple months later, that form of podcast is quite literally the type that Google’s newly viral NotebookLM is aiming to replace. AI comes for its makers too, it seems. Just feed it the paper, let AI hosts do the summarizing, and hope they don’t add in a sprinkle of industrial-strength “AI-powered deception.”
Speaking of conversations that could be AI on AI, for Premium folks I’ve started a single feed where I’ll post the latest interesting telemarketing and scam call recordings. These have always been a favorite—of mine, and from what I can tell, of yours—so I figured I’d just start posting the good ones there as I hear them. Most of them are from human call center workers trying to sell insurance, or windows, or some other dubious service. Oftentimes they transfer AI Evan through several different human agents, who sound like they could be in different parts of the world. Occasionally, you get to enjoy a gloriously agentic AI-on-AI moment.
“Yes, I’m very interested in this offer.”

Evan