The question is, what is The Question?
Whether ’tis nobler in the mind to take arms against a zeitgeist of troubles?
I suspect when the industry looks back on 2023, it will be remembered first as the year of GPT-4, but second, more memorably, as The Year Everyone Freaked Out. I mean:
Clockwise: the New York Times, Financial Post, Wall Street Journal, and The Economist. Not generally considered an assembly of spurious sci-fi soothsayers! But those headlines were the zeitgeist, not so long ago. This time last year I attended a clutch of AI events in San Francisco. If you’d told anyone at any of them that one year hence the state of the LLM art would remain basically unchanged, they would have laughed aloud … unless contempt compelled them not to bother replying at all.
Yet here we are. GPT-4-Turbo is better than GPT-4, sure, with a longer context window … but it’s not much better. (And remember, GPT-4 finished training in August 2022, ~18 months ago.) Claude 2 is … not as good. Google spent a cool billion dollars building Gemini Ultra … which is reportedly a bit better than GPT-4, but not much better, and which it has not released and has no apparent plan to release.
OpenAI Kremlinology has it that GPT-5 is en route, in the early stages of training from what we can tell, and yes, maybe it will be another transcendental breakthrough. Maybe. But let’s step back a minute here. How on earth has one single company, however brilliant its people, remained the one great hope / fear for the most revolutionary / hyped wave of technology in at least a decade, after an entire eruptive year?
GPT-4 cost an estimated $100 million to train. There are many tech companies for which that is not a large line item! Why has no one been able to surpass GPT-4 since? Is the know-how really that rare and magical? Are GPUs that scarce? (Surely they can’t be, given the pace at which Meta alone is acquiring them.) Have legal concerns, or safety concerns, convinced everyone to slow down simultaneously? (Again, I give you as counterexample: Meta.) Is data that hard to accumulate? (I mean, Google owns YouTube, to which ~500 hours of video are uploaded every single minute.)
In short: even if this is just the pause before the storm, why the pause?
I know I keep harping on this, but that’s because if the answer to the question “is AI on a path of continuous exponential growth?” is yes, then it is by far the most important question in the world.
But if it’s no, if instead AI is in the midst of something more like a stepwise series of S-curves, as I've long strongly suspected, then … well, then we’re in one of those ambiguous, confusing eras when the most important question in the world is not at all obvious, as per most of human history. (Other strong contenders: “Will China and the US wage war?” “Will climate change be stopped in time to prevent catastrophe?” “When will sub-Saharan Africa get rich?” “Will the next pandemic be natural or artificial?” “Will aging turn out to be a solvable bug / series of bugs?” “Will anyone go nuclear?”)
I suspect the main reason sizable cohorts of people have been eager to believe that AI is The Question is not self-interest, or self-importance, or credulity (…although as a practicing science fiction author, I would like to point out that their numbers, prominence, and volume are most definitely a testament to the power of SF). I think that what it offers, what any The Question offers, is something in terribly short supply these days: clarity. Few of us like to embrace ambiguity, at heart. Most of us would like to know that there is one question that matters most.
But we should all be so lucky. Instead, to quote The Protagonist in Tenet, “we live in a twilight world,” in which we muddle on, through the sociopolitical muck. That, I’m afraid, is the sad lesson of the S-curve.