The Insufficient Weirdness Hypothesis
The future ain't what it used to be. In fact it never was.
“We are all interested in the future, for that is where you and I are going to spend the rest of our lives!” — Plan 9 from Outer Space
I’m a science fiction author who works at a forecasting platform, so, as you might imagine, I’m especially interested in unusual futures. These days I’m sure I’m not alone. Science fictional headlines that would have seemed completely insane only a decade ago are suddenly commonplace:
“Risk of extinction by AI should be global priority, say experts” — The Guardian
“I Cloned Myself With AI. She Fooled My Bank and My Family.” — Wall Street Journal
“What would humans do in a world of super-AI?” — The Economist
The ambient terror of the future — already not exactly in short supply, in this era of climate change/crisis — is rising steeply. (Albeit not yet to new highs. I’m a Cold War kid; I remember thinking we would all probably die in a nuclear holocaust.)
I’ve written before, “we live in perhaps the most science fictional era in all of human history; certainly more so than any since the age when we landed on the moon amid the dread of global thermonuclear war.” So it’s not surprising that so many people are contemplating science fictional futures. What is slightly surprising, though, is that so many highly intelligent people are so bad at it.
The Hypothesis
I don’t know the future. Nobody does. But if you look back at the last century — the futures we expected, the ones we actually got — you notice a recurring pattern:
The most monstrous figure of the 20th century, the reason The War To End All Wars was merely a precursor, was … a vegetarian painter.
The rise and ephemeral triumph of “scientific” communism led directly to the dismissal, imprisonment, and even execution of ~3,000 biologists because they dared question a bizarre pseudoscience.
Out West, the victorious gung-ho Cold War Fifties were followed by … the Sixties. I mean. Where does one even begin.
Imagine going back in time and telling an American from the Eighties how the Cold War ended. “We won! Basically. Um. Except, well, Russia are now the bad guys again. But they were kind of good guys for a couple of decades, during which they bought half of London, but now they’re basically a mafia state fighting a proxy war against Western Europe, who, uh, are also simultaneously huge customers of theirs. But we still won the Cold War? Basically? Except it’s not quite like capitalism destroyed communism, because China is still run by the Communist Party, except they’re kinda capitalist now? I guess? But anyway China is richer than America now and there’s sort of a new Cold War starting between them, except the US is China’s biggest trading partner and the hottest US social media app is Chinese, and … um … look … it’s complicated.”
…Now imagine trying to explain social media to someone from the Eighties.
If they were still somehow clinging to sanity, you could then tell them that not one but two stars of their gloriously brainless action movie Predator would go on to govern major American states, while Donald Trump would proceed from his cameo appearance in Home Alone 2 to the Presidency, via reality TV stardom.
But then you’d have to explain reality TV, and…
…I think you see where I’m going here. Our future is going to seem really weird! We know this because a) the future has consistently seemed that way to its past for a considerable time now, and b) the causes of this weirdness — the ever-tighter interconnection of humanity, the increased ease and speed with which butterfly-wing emergent properties spread across the world — are only accelerating and intensifying.
I provisionally define “weird” as “the jarringly unexpected, especially when referring to the results of previously implausible/unlikely juxtapositions and/or events or forces significantly influenced by what had been unknown unknowns.”
Re that last phrase, Donald Rumsfeld, an interesting if terrible man, once observed:
There are known knowns; there are things we know we know. There are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know.
He got a lot of flak for this, but he was right. What’s more, there are ever more unknown unknowns, such that it is certain that the course of our future will be, to a very significant extent, dictated by events and forces which are to us, today, completely baffling / unknown unknowns / super weird.
Does this mean we can’t forecast the future at all? Absolutely not! But it does mean that visions of the future which do not include great weirdness and unknown unknowns — ones which simply grimly extrapolate from today’s ephemeral trends — are guaranteed wrong. I call this the Insufficient Weirdness Hypothesis.
You may protest, “Yes, OK, but since these are unknown unknowns, we’re best off just ignoring them, and pretending we can simply extrapolate!” But no. Entirely wrong. The metaphor of the drunk looking for his keys under the lamppost, because that’s where the light is, gets overused … but here it is exact. The drunk will not find his keys in the light, but he has other senses, and so some chance of finding them by grasping in darkness. Similarly, we can still make educated guesses at super weird futures—and such guesses will be better than deliberately dumb extrapolation into non-weird ones.
OK! Having introduced this hypothesis, let’s talk about how it applies to AI and … (checks and rechecks notes) … its allegedly imminent risk of human extinction.
Putting The Weird Into AI
You might argue AI will be some kind of de novo deweirding force … but if so, you haven’t been paying attention. Modern AI is really weird! It’s incredibly weird that transformers work so well. It’s super weird that we found two separate mathematical paths to diffusion models which were then revealed as different aspects of a unified model. Prompt engineering, capability overhang, emergent behavior — all very weird. If anything AI is far weirder than any other tech. As such we can expect great weirdness to continue to apply to both AI development and AI outcomes.
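(For the mathematically curious, here is a minimal sketch of that “two paths, one model” weirdness, in the score-based SDE framing of Song et al. 2021; the notation follows that framing and is an illustrative paraphrase of that result, not something from this essay.)

```latex
% Forward "noising" process, shared by both families of diffusion models:
\[
  \mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}w
\]
% Reverse-time "generating" process, which only needs the score \nabla_x \log p_t(x):
\[
  \mathrm{d}x = \bigl[ f(x,t) - g(t)^2 \,\nabla_x \log p_t(x) \bigr]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{w}
\]
% DDPM's noise-prediction loss and denoising score matching both estimate that
% same score; the two "separate mathematical paths" turn out to be different
% discretizations of this one pair of equations.
```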
Science fiction, incidentally, has been well aware of the Insufficient Weirdness Hypothesis for some time now. My own forthcoming novel Exadelic is about super weird AI futures. From William Gibson’s Mona Lisa Overdrive, published 1988:
Continuity was writing a book. Robin Lanier had told her about it. She’d asked what it was about. It wasn’t like that, he’d said. It looped back into itself and constantly mutated; Continuity was always writing it. She asked why. But Robin had already lost interest: because Continuity was an AI, and AIs did things like that.
(update: and, indeed, I am informed that “insufficient weirdness” is also a Cory Doctorowism. Not sure whether my usage was independent evolution or unconscious adoption, but either way, credit where it’s due!)
Weird AI Development
Some people think we’re in the midst of a continuous exponential climb to the Singularity; but the Insufficient Weirdness Hypothesis holds that we are not, because that would not be weird at all, and instead AI will progress in screaming, lurching fits and starts. Indeed we’re already seeing this. 2012-2017 was an era of increasing complexity, of convolutional and recurrent and generative adversarial networks … and then suddenly we rolled back to far simpler, yet far more powerful, transformer models. (Which still suffer from major lacunae like, y’know, inherent statelessness.)
Notably, the Insufficient Weirdness Hypothesis suggests we will have periods of relative stagnation, perhaps of merely linear growth, before some weird new breakthrough ambushes us all again. Which means we will have some opportunities in the future to, at our relative leisure, collect our thoughts and reconsider our approach to AI. Among certain cohorts there seems to be a sense of panic that we must intervene now now now, even though the events/forces we would intervene to affect remain completely, totally, 100% hypothetical! While this is pleasingly weird in and of itself, the Insufficient Weirdness Hypothesis reassures us: our window of intervention opportunity is not closing. In fact it probably hasn’t even really opened yet.
Weird AI Outcomes
Similarly, “AI doesn’t like us / decides we’re in its way, so it kills us all” is so unweird I get bored just typing it. Whatever happens, according to the Insufficient Weirdness Hypothesis, it certainly won’t be that. Calm down. (And obviously outcomes are downstream of development, so, per the above, we’ll still have plenty of opportunities to better assess the oncoming weirdnesses and act on them.)
This doesn’t mean there are no AI dooms to worry about. There are lots! But they’re weird dooms:
“Amoral AI exterminates us all because why not” — Not weird, no way.
“AI exterminates every human who eats meat, because it deems them all irredeemably morally tainted” — OK, now we’re getting weird.
“AI exterminates every human who does not eat meat, because it deems them all irredeemably morally tainted” — Very weird! Possibly even too weird? Well done!
“An AI-worshipping cult starts trying to bring about the extinction of humanity?” — Not that weird, but I’ll allow it.
You might say: “But the paperclip maximizer! That’s weird, right?” No, it is not, and if you think it is you need your sense of weirdness recalibrated. Paperclip maximization is so not weird that Walt Disney made a film about it way back in 1940, with the tirelessly water-hauling brooms of “The Sorcerer’s Apprentice” in Fantasia.
The important thing here is that, according to the Insufficient Weirdness Hypothesis, weird outcomes — in fact, the weirdest outcomes you can imagine, probably — aren’t just colorful and (perhaps in a gallows-humor way) entertaining. They are also, as bizarre as they may seem, much more indicative of the things we should actually worry about, long-term. That may sound completely counterintuitive. But the whole point of the Insufficient Weirdness Hypothesis is that mere intuition and extrapolation are wildly insufficient for planning for our weird future.
(Editorial note: yeah, I previously said I’d next write about fine-tuning. Next time! I’m settling into a cadence of “two posts per month, one fun, one more rigorous.”)