When AI doomerism spirals into violence
The backstory of my two years on the trail of the Zizians for WIRED, and a personal history of the ideas that set them on a path to mayhem.
Last Friday, Wired published my 14,000-word narrative account of the Zizians, a group that I’d been quietly reporting on since early 2023. You may have come across a baffling headline or two about the Zizians over the last few weeks. After a Border Patrol agent was killed in a shootout with two members of the group in Vermont in January, the events surrounding them have been speedrun simultaneously through the media flood zone, the tabloid grinder, the influencer engagement farms, and the true-crime industrial complex. (If you’re a new Shell Game newsletter subscriber you may also wonder: What does this have to do with voice cloning? Well: Shell Game is about not just voice AI, and not even just AI, but a broad spectrum of modern phenomena that share the things-are-not-what-they-seem vibe of the show’s first season. Most of the things I work on tend to occupy this space.)
The tale of the Zizians is complicated and tragic, extremely difficult if not impossible to summarize in a coherent way. It’s the story of a handful of young, gifted people who were attracted to a Bay Area community, the rationalists, brimming with ideas about self-improvement and saving the world. The Zizian faction, however, became disaffected with that larger community. And inspired if not directed by the dark and impenetrable philosophy of one of their number, Ziz LaSota, they set off on their own path to enlightenment. That path, so far, has led them to an alleged association with six killings and two suicides, with seven members of the group currently in custody.
I don’t ever presume anyone wants to consume 14,000 words of my prose, but if that description intrigues you, reading the whole story is really the only way to get a handle on it.
One of the reasons Wired decided to run the article at such length—besides the fact that we were sitting on years of reporting that no one else had—was that many recent news accounts of the group have been… opaque, at best. (Updating to note: There has been some really strong coverage too, particularly among local reporters at VTDigger and Open Vallejo, at the SF Chronicle, and independently by Kenny Jones, who compiled a massive timeline of facts about the story.) At worst, they’ve simplified the story to the point of falsehood. In some versions it’s the saga of a “trans cult” (because a large proportion of the group was trans), in others it’s that of a “vegan cult” (one of their tenets was an extreme commitment to veganism), in still others it’s one of anti-fascists gone wild (because they espoused opposition to certain political forces on the rise). Often they are described as “geniuses,” when in fact they were academically and technically gifted but not known for any particular brilliance. The Zizians are, in a way, a culture war funhouse mirror for our times.
But one aspect of the Zizians’ story that has often gotten lost, so far, is the fact that what brought their initial members together was not any of the notions above. Their first commonality, roughly speaking, was their connection to the idea that unfriendly, superintelligent AI would someday destroy the world, and something needed to be done to stop it. As it happens, my own connection to the story, and my reason for taking it on two years ago, sprung from my own 25-year history with that same idea.
Way back in the summer of 2001, having just departed the staff of Wired magazine and trying to make my way as a freelancer, I took a side job helping research a book by the technologist Bill Joy. A co-founder of Sun Microsystems and one of the minds behind the Java programming language that powered much of the online revolution, Joy had shocked the tech world with a 10,000-word cover story for Wired the previous year, titled “Why the Future Doesn’t Need Us.” (Even today, it’s the very rare magazine story with its own Wikipedia page.) Joy’s thesis was that accelerating progress in each of three technological categories—advanced biotechnology/gene manipulation, robotics/AI, and nanotechnology—had the potential to create new existential threats to humanity, on par with nuclear weapons. If we didn’t work to contain those threats, Joy wrote, we risked runaway scenarios in which viruses manipulated in labs became global pandemics, self-replicating nanobots turned the world into “gray goo,” or robots powered by superintelligent AI discarded humans entirely.
At the time, I was Wired’s research editor and the editor of the letters page—our best way to judge reader interest in the pre-social media, things-going-viral days. The story was an absolute sensation, likely unsurpassed in the magazine before or since. Joy got a deal to expand “Why the Future Doesn’t Need Us” into a book, and I—having been the fact-checker on the magazine story—signed on as his researcher.
That’s how I first came to meet a young AI enthusiast named Eliezer Yudkowsky.