The seven-part series Extropia’s Children with which this Substack launched—about how rationalism, cryptocurrencies, effective altruism, and modern AI philosophy were all birthed from an obscure 90s mailing list—has attracted a burst of attention lately. As such I thought it would be useful to create a single canonical page with links to all seven chapters, and “nine months later…” updates for each.
Without further ado, Extropia’s Children (plus updates):
The Wunderkind. Back in the 90s, a self-taught teenage genius joined an obscure mailing list. That odd seed led directly, via Harry Potter fanfic, to today's prominent Effective Altruism and AI Risk movements; to bizarre, cult-like communities whose highly intelligent members feared demons in their minds might cause them to create hells or accidentally exterminate humanity; and, indirectly, to the birth of Bitcoin. Come down the rabbit hole into the epic, surreal, barely plausible saga of Extropia's Children.
(Postscript/update: I’ve seen several armchair psychologists suggest Yudkowsky’s turn from AI optimism to AI doom was triggered by his brother’s tragic death. The record does not support that. If one were to localize his ‘conversion’ to a single month, it would be May 2003, 18 months earlier. That month also marks the genesis of his ‘Sequences’ and his Harry Potter fanfic: “I may have to … write something to attract the people we need. My current thought is a book on the underlying theory and specific human practice of rationality … [It’s] a roundabout approach, and I strongly dislike that, but it's the only method I can think of.”)

This Demon-Haunted World. All the pieces seemed in place to foster a cohort of bright people who would overcome unconscious biases, adjust their mindsets to consistently distinguish truth from falsehood, and become effective thinkers who could build a better world ... and maybe save it from the scourge of runaway AI. Which is why what happened next—the demons, the cults, the hells, the suicides—was, and is, so shocking.
(Postscript/update: I subsequently discovered a Feb 2023 post reporting that Ziz, who looms over this chapter, went missing-presumed-dead in a boating accident last August … but had in fact faked her own death(!) … and was as of that writing in custody in Pennsylvania, “plausibly involved in [a] double homicide.”(!!))

Extropicoin Extrapolated. Let me reassure you: the disturbing stuff is now mostly behind us. In this chapter we climb into a new rabbit hole, one far more fun and scarcely less weird: that of the birth of cryptocurrencies.
(Postscript/update: it’s perhaps worth noting that Caroline Ellison, the former CEO of Alameda Research who was central to the spectacular collapse of FTX, was apparently hugely influenced by Yudkowsky’s Harry Potter novel.)

What You Owe The Future. The most prominent intellectual descendant of the extropians is the enormously successful, and increasingly controversial, movement known as “effective altruism.” Still, puzzled readers may well ask: what even is effective altruism, and what does it have to do with AI, rationalism, and the ancient extropians mailing list?
(Postscript/update: this was written before the fall of FTX, which seems to have tainted the entire effective altruism movement; one could argue that AI doom, not EA, is now the extropians’ most prominent intellectual descendant.)

Irrationalism. What was the great rationalist diaspora? What has the Machine Intelligence Research Institute actually, well, done? And why—given the amazing rise of his philosophies—is Eliezer Yudkowsky now seemingly wracked by utter despair?
(Postscript/update: in the nine months since, MIRI has shipped no code and published no research. They have, however, written many despairing blog posts and appeared on a lot of podcasts.)

Slate Star Codex and the Geeks For Monarchy. That LessWrong became home to the neoreactionaries—who were, in turn, responsible for only a small fraction of its conversations—seems fairly well accepted. How did a custom-built online home for rational thinking attract a thriving coterie of far-right authoritarian racists?
(Postscript/update: not much to mention here except that AI risk is pretty frequently encountered on Astral Codex Ten these days.)

The Inferno of the Nerds. We end this series, inevitably, with an attempt to evaluate answers to the question: “how bad is this threat of human extermination that everyone keeps talking about? What is the fabled AI risk?”
(Postscript/update: AI x-risk is now a commonplace major media subject … with the level of analysis you’d expect. But there are more serious thinkers too; I particularly appreciate Rohit Krishnan’s independent Drake Equation for AI risk.)
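(For readers unfamiliar with the approach: a Drake-style estimate of AI risk simply chains conditional probabilities and multiplies them. As a purely illustrative sketch, with factor names that are my own placeholders rather than Krishnan’s actual terms: P(catastrophe) ≈ P(AGI arrives) × P(misaligned | AGI) × P(uncontainable | misaligned) × P(no effective human response | uncontainable). Plugging in, say, 0.5, 0.3, 0.2, and 0.3 compounds to roughly 1%; the point of the exercise is to argue over the individual factors rather than the headline number.)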
In semi-related news, my AI novel Exadelic hits bookstore / Kindle shelves exactly three months from today. Pre-order early, pre-order often! (Conventional publishing wisdom says pre-orders are super important. I’m a little dubious…but they can’t hurt.)
That’s all, folks. Regular Gradient Ascendant programming resumes later this month.
Hello. As you know, Jon, I have never used your (or anyone else's) Substack to direct readers to my eccentric little newsletter. But there's always a first time. About AI existential risk, moratoriums, etc.: I think we have all been coming at it from the wrong direction, including the doomers. There's a profound ethical dimension that has not been discussed. I go over it in my last three posts on my Substack, if anyone wants to wander over there and read them (they're fairly short).
The greatest existential risk in developing AIs is not the danger our creations pose to us, but the danger we pose to ourselves as would-be ethical beings... We're on the verge of a huge mistake.
I may be wrong, but I think we need to stop worrying so much about aligning AI with our values, and start worrying about aligning ourselves with our own ethics.