On Crying Wolf
Who's that knocking at the door? Is it... could it be... oh, never mind, it's DoorDash.
Last weekend the tech world was glued to its wackiest soap opera since ... last November's batshit roller-coaster tale of effective altruism and a controversial 30something tech CEO named Sam. This year’s saga, greatly condensed, since if you’re reading this you’re probably already all too familiar with it:
The Story So Far
Fri Nov 17 - The board of OpenAI, the hottest and most prominent company in tech, abruptly fires CEO Sam Altman; demotes board chairman Greg Brockman; and names Mira Murati interim CEO. Brockman immediately quits as president as well. This is announced before markets close; Microsoft promptly loses $50B of market value.
Sat Nov 18 - The response is eruptive. OpenAI employees threaten to quit en masse if the board does not immediately reinstate Altman & Brockman and then resign themselves. Murati takes their side. Microsoft CEO Satya Nadella tries to broker a deal.
Sun Nov 19 - After seething negotiations, the board — widely viewed as guided by the tenets of effective altruism, and in particular EA’s “longtermist” fears of AI so powerful it harms or even exterminates(!) humanity — instead doubles down, and hires onetime Twitch CEO Emmett Shear as OpenAI’s new new interim CEO. Shear declares an emergency Sunday all-hands meeting. All hands decline.
Mon Nov 20 - Microsoft announces they are hiring Altman and Brockman to run an independent AI wing of Microsoft Research. 97%(!!) of OpenAI employees declare their intent to follow if the board doesn't reinstate & resign. Among them is ... OpenAI chief scientist and board member Ilya Sutskever(!!!), who also tweets that he "deeply regrets" his part in the board putsch. New CEO Shear reports that he cannot get a written explanation from the board of why they fired Altman(!!!!). The board briefly explores a merger with rival frontier lab Anthropic, which itself split off from OpenAI some years ago.
Tue Nov 21 - After mediation by Shear and Nadella, Altman & Brockman are reinstated, and all but one board member (Quora CEO / former Facebook CTO Adam D'Angelo) resigns, replaced by economist Larry Summers and another former Facebook CTO, Bret Taylor, as an interim three-man board who will determine a full nine-person board in the fullness of time. The drama seems over, modulo the remarkable fact that we still don’t know why the board did it—
...until Reuters and The Information report that, weeks ago, OpenAI researchers made a major AI breakthrough, and, according to Reuters, "wrote a letter to the board of directors warning of a powerful AI discovery that they said could threaten humanity."(!!!!!) This is promptly disputed, and I for one am deeply skeptical that this breakthrough had anything to do with the firing; but, as you'd expect, the reports stir an online hornet's nest of wild paranoia and speculation, which as of this writing has yet to abate.
The Stories To Come
I give you not a prophecy but grim acceptance: that hornet’s nest won’t go away. Ever. Oh, it will wax and wane, but it will rage for the rest of our lives. “AI doom,” “summoning the demon,” “AGI is an existential threat to humanity,” etc., are a perfect storm of vaguely plausible, unthinkably disastrous, and entirely unfalsifiable, meaning these arguments will continue ad nauseam …
…unless.
There is a scenario in which we stop arguing. If doomers continue to scream doo-oo-oom! with every step forward, every year or even month; if we adopt new advances nonetheless and see they’re not just harmless, but so anodyne that it’s hilarious they ever inspired such fear… the doomers will turn from figures of fear to figures of mockery, and eventually wither away like the “cell phones cause cancer!” folks.
I myself am not, at all, a doomer. I’m willing to worry about one or two hypotheticals, but not (yet) a very long and increasingly implausible chain of them. And yet I think such an outcome would be quite a bad thing! Because this is a very old story indeed: it is The Boy Who Cried Wolf.
Of course the motivations are different. The boy in the fable was deliberately deceptive. AI doomers are, despite the skepticism prevalent outside the industry, sounding alarms in genuine fear and good faith. But every time they tell people to interpret the first in a long chain of hypotheticals as an imminent concrete threat, the effect is the same; people wait for their latest dire warning to turn into actual danger, and it doesn’t, and their credibility takes yet another major hit.
I expect we’ll make extraordinary strides in AI over the coming years. (I’m both an AI tinkerer and AI SF author after all.) And I agree in principle that as the links in the chain of AI accomplishments turn from “hypothetical” to “achieved,” we will eventually reach a point at which we should maybe move more cautiously. So if by the time we do get there, many years hence, AI doom has long been written off as an irrelevant laughingstock moron belief … that would be bad. The fable of the boy who cried wolf has many messages, one of which is aimed at the village.
Unfortunately the current climate of relentlessly prophesying imminent apocalypses is such that a wolf-boy outcome seems all too plausible. Maybe the prospect of an AI Inferno is too compelling for us to tune the doomers out completely, even as they are consistently wrong about everything for years to come. Maybe. But if not… it feels like they’re expending all their valuable cultural ammunition long before it might actually be needed.
I mean, let’s face it, there’s already a lot to laugh at them about.
The Stories Gone By
2005-07: Eliezer Yudkowsky, high priest of AI doom, “worked at various times ... on the technical problems of AGI necessary for ‘Friendly AI.’ Almost none of this research has been published, in part because of the desire not to accelerate AGI research without having made corresponding safety progress,” as announced years later by the executive director of his institute. Did this research bear any fruit at all? Clearly not. Do those particular fears look hilariously silly now? Yep.
2013-2016: https://twitter.com/fchollet/status/1727798735676350553
2019: “AGI and ASI are still scifi concepts for now, but they’re coming closer and closer to reality at a rapidly increasing rate … DeepMind and OpenAI, have both opted to focus on the same area of AI research for getting to AGI: deep RL (reinforcement learning).”
Also 2019: OpenAI basically writes Deep RL off as a dead end (for now) in favor of transformer architectures and large language models. DeepMind soon follows. That same year, OpenAI also declines to release its GPT-2 model, warning darkly that its potential for generating misinformation made it too dangerous. …A few months later they realize it is not in fact too dangerous, and release it. Today, GPT-2 seems hilariously quaint.
2021: The OpenAI team that built GPT-3 leaves the company, citing “industrial capture” in the wake of Microsoft’s $1 billion investment, along with “concerns over AI safety.”
2023: Google and Amazon invest a combined $6 billion in Anthropic, which now offers LLM APIs very similar to OpenAI’s. Is Anthropic far more safety-focused behind the scenes? Maybe! Even probably! …But do they really look all that different to the casual observer? Not so much.
2023: OpenAI releases GPT-4. People immediately contend that it has “sparks” of AGI, and/or “ignited the fuse” of AGI. But after eight months of engineers trying to get it to work in production … while it remains true that GPT-4 is capable of truly magical moments … let’s just say that such claims are no longer a popular genre.
Again, the question is not whether these fears are actually misguided. Deep RL hit a wall, but it’s widely believed that the fabled new “Q*” breakthrough has something to do with combining RL and transformers. The point is that people have been warning darkly about the imminent dangers of accelerated AGI research for twenty years, and we have yet to see anything even remotely like a real risk … so can you really blame the audience for being increasingly skeptical? Shouldn’t the doomers have waited for, y’know, if not a doom, then at least an actual danger?
It’s true that if we are on an unchecked exponential growth curve, the time between early ripples and tsunamis can be very short. But “unchecked exponential growth” rather than “bog-standard S-curve” is both an extraordinary claim that therefore calls for extraordinary evidence, and an unfalsifiable one; if progress is slower than expected, you just claim that yeah, sure, the exponent was a little smaller than you first thought, but next year it’s totally going to hockey-stick.
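To make that unfalsifiability concrete, here’s a toy sketch (mine, not anyone’s actual forecast, with entirely made-up parameters): in its early stretch, a logistic S-curve generates almost exactly the same data as a pure exponential, which is why “we’re on an exponential” can neither be confirmed nor refuted until the curve has already bent one way or the other.

```python
# Toy illustration only: arbitrary invented parameters, not real AI-progress data.
import math

K, r, t0 = 1000.0, 1.0, 10.0  # logistic ceiling, growth rate, and midpoint (all invented)

def logistic(t):
    """Standard logistic (S-curve): grows exponentially at first, then saturates at K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def exponential(t):
    """Pure exponential matched to the logistic curve's starting value and rate."""
    return logistic(0.0) * math.exp(r * t)

for t in range(7):
    s, e = logistic(t), exponential(t)
    print(f"t={t}: s-curve={s:8.2f}  exponential={e:8.2f}  ratio={e / s:.3f}")
# The two stay within a couple of percent of each other over this early stretch;
# they only diverge once the S-curve starts bending toward its ceiling.
```

None of which tells you which curve AI progress is actually on; that is precisely the part nobody can yet demonstrate, in either direction.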
In the interim, alas, the boy keeps shouting wolf — because he really thinks there’s one at the door! — and inadvertently teaches the village to ignore his cries. I’m an AI vroomer, but I support the doomers in principle… which is why I’d like them to be less ridiculous today. That way they might still be relevant on that most-likely-distant day when we may actually need them. Is that really too much to ask?
In my experience, going back to the early 80s, AGI has always been 5 years away, and will likely continue to be so for some time.