Chapter 1: The Wunderkind
Back in the nineties, a self-taught teenage supergenius joined an obscure mailing list. That odd seed led directly, via Harry Potter fanfic, to today's prominent Effective Altruism and AI Risk movements; to bizarre, cult-like communities whose highly intelligent members feared that demons in their minds might cause them to create hells or accidentally exterminate humanity; and, indirectly, to the birth of Bitcoin. Intrigued? Come down the rabbit hole into the epic, surreal, barely plausible saga of Extropia's Children.
i. Origin Story
“I don't have time to be a teenager,” wrote 17-year-old Eliezer Yudkowsky, already obsessed with his lifelong mission to save humanity from itself. Poor kid. He had already put up a web page informing the world of his genius, including the important points “I am not a Nazi,” “I am not a teenager,” “I don't think I'm better than 'normals,' or as I call them, 'full humans,'” “I'm not a self-righteous loon,” and “I'm not overestimating my intelligence.”
He wasn't. He reports getting an SAT score of 1410 aged eleven — non-Americans, trust me, this is absurd — but there's no need to look at test scores. His spectacular intelligence and florid neurodivergence radiate through his wildly prolific teenage output, as they will throughout his life. But as he will discover, to his great dismay, extreme intelligence doesn't inoculate you against being wrong; it just makes you wrong in more interesting ways.
Eliezer Yudkowsky: born 1979 into an Orthodox Jewish family in Chicago. His father was a physicist turned Intel / Bell Labs technical architect, focusing on speech technology, well known in science fiction fandom; his mother, a psychiatrist. Eliezer started reading SF aged seven, started writing it aged nine, and was leagues ahead of his grade-school compatriots ... until, in seventh grade, he “suddenly lost the ability to handle school. [...] 'I can't do anything.' That's the phrase I used.” Despite his parents’ best attempts, he did not attend high school.
He has referred to himself as “previously condemned to an Orthodox Jewish education” and said “My own experience with growing up in an Orthodox Jewish family ... broke my core emotional trust in the sanity of the people around me,” but also: “I never hated my parents. They said they loved me, and the cultural wisdom of the science-fictional literature said your parents genuinely love you no matter what you disagree about ... People can love you and care about you and want what's best for you and still use their power to screw up your life and put you through hell. Behold the foundations of my morality. Sometimes, awful things happen and there isn't anyone you can rationally blame for it.”
[Those links are only a few of his many fragments of online memoir. When a hyperlexical genius becomes a quasi-guru to throngs of logorrheic overthinkers, the public record gets overwhelmingly overstuffed. This could all be subtitled “In which your narrator reads a lot of Yudkowsky and co. so you don't have to.”]
ii. The Singular Extropians
Wikipedia describes the Singularity as “the hypothetical point at which technological growth becomes radically faster and uncontrollable, resulting in unforeseeable changes to human civilization.” Much has been written about this notion since its introduction by legendary mathematician John von Neumann and popularization by SF author Vernor Vinge. Many, including me, have described it as “a theological belief masquerading as a technological one” or, more derisively, “the Rapture of the Nerds.” For teenage Eliezer Yudkowsky, the discovery of the Singularity immediately — and to this day — resolved the question of the meaning of life. He was sure it would happen by 2020.
Singularitarians were also, almost invariably, Transhumanists (believers in surpassing bodily limits entirely) and Extropians (believers in accelerating the evolution of the human condition). Aged seventeen, Yudkowsky found his way to the small, obscure "extropians" mailing list, probably via a 1994 Wired article. Its discovery must have been a profound relief. Finally. At last. His people.
A remarkable group. Names already famous in tech circles — Marvin Minsky, Eric Drexler, Ralph Merkle, Ray Kurzweil, Steve Jurvetson — attended Extropian parties at "Nextropia," the "nerd house" in Cupertino; spoke at Extropian conferences; contributed to Extropy magazine; and/or chatted on the mailing list. More remarkable yet is the appearance of numerous names not yet (in)famous, in this list:
Julian Assange, doomed to infamy.
Nick Bostrom, who would author the NYT bestseller Superintelligence.
Nick Szabo and Wei Dai, Bitcoin's godfathers.
Hal Finney, either the second person to ever mine Bitcoin... or the first. (Much more about the cryptocurrency connections later.)
Crypto-anarchist Timothy May.
Future bestselling SF author Charles Stross.
Seminal game designer Steve Jackson.
Steganographer Romana "Cypherella" Machado.
Author / technologist / investor Ramez Naam.
Roboticist Hans Moravec, whose paradox plagues AI researchers to this day.
All from a tiny mailing list numbering in the hundreds. It's as if a single tiny village were the hometown of a dozen different celebrities. (That we know of. It's accepted that 90% of most online communities read without posting; we can only speculate who else may have lurked amid the extropians.)
iii. Young Eliezer
It is clear from even a casual perusal of the Extropians archive (maintained by Wei Dai) that within a few months, teenage Eliezer Yudkowsky became one of this extraordinary cacophony's pre-eminent voices. Extropian Eliezer is impressively sympathetic. He describes himself as “an Algernon,” after the famous story about a man made so intelligent he can no longer befriend other humans. He writes: “The only reason I'm here is to save humanity,” and “if I could find a way to spend every waking minute working, I would do so.” He speaks highly of “the masses,” and is remarkably what-would-today-be-called-woke for the nineties in general, much less for an extremely online teenage boy.
He is also absolutely convinced the end of the world as we know it is nigh, in the form of the Singularity. He says “the Singularity may well be before 2000,” and calls 1997 “the leading edge of the Singularity.” He cannot wait. The man who will become the world's leading doomsayer of catastrophic artificial intelligence is convinced at this age of the exact opposite: that superintelligences will necessarily be far more ethical than us.
His fatal intellectual flaw is already apparent. His self-proclaimed ‘landmark work’ is the ambitiously titled “A Practical Guide to Neurosurgical Intelligence Enhancement Using Current Technology.” Unfortunately, there is nothing practical about it. This is unsurprising; however precocious, he's only seventeen! The problem is that he doesn't seem to realize it's vague, theoretical, and handwavey, the polar opposite of a practical guide. Similarly, his next major work, “Coding a Transhuman AI,” is a 50,000-word essay which suggests many notions but, despite its title, contains ... no actual software code. He will completely rewrite this piece a few years later. The rewritten version? Still no code.
This “lots of concepts, zero implementation” pattern keeps recurring: “After halting work on the ill-fated business concept, I spent the next few months writing 500K of design notes for a programming language called Flare.” Note: design notes, rather than the actual language, or, say, a compiler. Flare never launched. Eliezer can write code; he just never ships anything.
Over in the cypherpunks mailing list — about which, again, more later — real hackers remind each other: “cypherpunks write code.” Eliezer never really does. Joel Spolsky will soon write a famous essay exhorting companies to hire those who are "Smart, and Get Things Done." Eliezer is an extreme case of "Smart but doesn't Get Things Done." He will spend his life as a theoretician with zero formal training, not a practitioner. Because of this, successful practitioners will tend to treat him with suspicion.
iv. The Institute, The Catastrophe, and The Transformation
In 1999, Eliezer writes “I don't have a life. I have a job.” He means making the Singularity happen; he means job metaphorically. But next year it literally becomes his job. Aged 21, he moves to Atlanta; co-founds a non-profit called the Singularity Institute for Artificial Intelligence; and for the next three years, Eliezer Yudkowsky, his mind undoubtedly in the top percentile of the top percentile of the top percentile, is paid $24,000 annually to ... well ... do his thing.
This admittedly still beats a lot of grad student stipends, and Eliezer is hardly a materialist. He enjoys The Matrix and Buffy the Vampire Slayer ... and they lead him to write “Like Buffy, there's something I can do better than anyone else in the world, and that means I can't have a life” and “If I choose to live my life for the Singularity, nobody has any right to demand that I live it for myself” and “Brian, you and I have the same chance of dying of old age as we have of dying of ovarian cancer.” (Also perhaps worth noting: he already opposes the use of AI in the justice system, especially “Opaque AI like neural networks.”)
He is on the brink of his great catastrophe.
In years to come, he will write “A good deal of the material I have ever produced — specifically, everything dated 2002 or earlier — I now consider completely obsolete” and “You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named "Eliezer Yudkowsky". I do not share his opinions.”
This is objectively incorrect. His life since, up to and including today, has been no less driven by the ideas of AI and the Singularity. In truth his views and values hew much closer to those of his 17-year-old self than to those of the vast majority of middle-aged men. But there was indeed a sharp break, a distinct phase change, in his attitude towards the future.
Naturally, he wrote about it at great length in one of his Sequences, a series of collections of essays describing his new overarching intellectual framework called "rationalism." The final Sequence, Coming of Age, is a ~20,000-word depiction of his own journey to rationalism. The inciting incident? ...The realization that when he was younger, he had made a mistake.
If this seems like a whole lot of angst and processing to come to terms with having once been wrong as a teenager ... well, yes. "Coming of Age" is exceedingly navel-gazing. Granted, the subject in question is, to Eliezer at least, the most important question in all of human history; and the Sequence does explain, in detail, how he went from “building a superintelligent AI is the immediate and most important goal for all humanity” to “building a superintelligent AI is a catastrophically dangerous idea which could lead to the imminent extermination of all humanity.”
Note that the first view had been no mere opinion, but the cause he had devoted his entire life to ... and the reason he otherwise had no life. It must have been a shattering experience. Perhaps this is why "Coming of Age" also, oddly, describes another form of novel cognitive dissonance: encountering people who did not think Eliezer was the most intelligent person they had ever met, and then, more shocking yet, personally encountering people who seemed possibly more intelligent than himself.
It's very easy to laugh at him for this, especially since the bar is so high: the intelligences in question were legendary mathematician John Conway and superstar venture capitalist Steve Jurvetson. (I've met Jurvetson a few times. After the first I texted a friend: “Every other time I’ve met a VC I walked away thinking ‘Wow, I and all my friends are smarter than you.’ This time it was ‘Wow, you are smarter than me and all my friends.’”) But remember, Eliezer is a genius with no formal education, still in his early twenties, already undergoing a massive epistemological shock. Have a little sympathy.
Even if you still find that comical, what happens to him next is desperately tragic: the death of his nineteen-year-old little brother in 2004. A few years earlier, heartbreakingly, he had excitedly told the mailing list: “My little brother just wrote his first function! In Python.” What he wrote about that same brother's death is utterly heart-wrenching, and an angry scream in the face of the enemy called death. In an update six months later, he thunders: “You do not make peace with darkness! You do not make peace with Nazi Germany! You do not make peace with Death!”
v. The Promulgation
In the subsequent years his career takes off. The Singularity Institute, which will become the Machine Intelligence Research Institute aka MIRI, grows into prominence. Its advisory board is a futurist who's who: Ray Kurzweil, Peter Thiel, Aubrey de Grey, Christine Peterson, Nick Bostrom. He and it move to California. Its revenue jumps from $33,000 in 2001 to $466,000 in 2006, and his salary to a (still modest) $63,000. He launches a new mailing list, SL4.
Then, from 2006 to 2009, in what can be interpreted as an attempt to discover how his younger self made such a terrible mistake, and to avoid doing so again, Eliezer writes the 600,000 words of his Sequences by blogging “almost daily, on the subjects of epistemology, language, cognitive biases, decision-making, quantum mechanics, metaethics, and artificial intelligence” at Overcoming Bias, a blog he shares with prominent economist (and fellow extropian alumnus) Robin Hanson.
The Sequences are still replete with discussion of AI risk. But, crucially, they focus more on “how best to think,” “how to overcome your unconscious biases,” “how and how much to change your mind in the face of new evidence,” and generally “how to be a more rational and therefore more effective human being.” Yudkowsky subsequently uses them to seed a new online community called LessWrong, which quickly becomes the undisputed hub of the burgeoning “rationalist” community.
What really got people's attention, though, was his Harry Potter fanfic.
In 2010, a year in which the Singularity Institute's revenue exceeded $900,000 — roughly twice that of two years earlier, half that of two years later — Eliezer began a 122-chapter, 2,184-page, 660,000-word fanfiction novel, Harry Potter and the Methods of Rationality, which he would finally finish five years later. Despite its enormous length it was hugely successful. It has more than 15,000 Goodreads ratings, and more than 36,000 reviews on fanfiction.net. It was lauded in The Atlantic, Vice, and The Hindustan Times. It has been translated into twelve languages, and its Russian translation was briefly the most lucrative Russian crowdfunding project of all time.
Between his Sequences and his Harry Potter fanfic, come 2015, Eliezer had promulgated his personal framework of rational thought — which was, as he put it, “about forming true beliefs and making decisions that help you win” — with extraordinary success. All the pieces seemed in place to foster a cohort of bright people who would overcome their unconscious biases, adjust their mindsets to consistently distinguish truth from falsehood, and become effective thinkers who could build a better world ... and maybe save it from the scourge of runaway AI.
Which is why what happened next, explored in tomorrow’s chapter — the demons, the cults, the hells, the suicides — was, and is, so shocking.