Modern AI is acclaimed as the big new — maybe the biggest new — breakthrough technology. (I fully agree.) OpenAI’s ChatGPT reached a million users faster than any other product or service in history. AI art is the most culturally controversial technology in years. The New York Times called ChatGPT a “Code Red” for Google’s mega-bazillion-dollar search business. Many, many billions have been, and are being, invested in AI.
So why do today’s AI products still seem so … underwhelming?
Their poster child is Jasper.AI, which uses GPT-3 and diffusion models to generate marketing copy and images, has an ARR approaching $100M, and recently raised $125M at a $1.5B valuation. Very respectable! …But not really as staggering as you’d expect from the vanguard of the most revolutionary technology since the smartphone.
Jasper’s tag line is “Create amazing blog posts / marketing copy / sales emails / SEO content / Facebook ads 100 times faster.” Again, the total addressable blogging / copywriting market isn’t small … but we’re not talking Uber here. Much less Google.
Of course, fundamental AI companies are much hotter. Look no further than Microsoft’s reputed $10B investment in OpenAI at a $29B valuation. And it seems like everyone and their dog is trying to build AI tooling and platforms. But actual products? Well, other such companies, comparable to Jasper, include … uh. (Looks around.) Um. That’s pretty much it, today, actually.
But but but “Code Red” for Google Search! Says The New York Times! I mean … maybe. Eventually. In the fullness of time, with new and better LLM tech. But certainly not today, as Gary Marcus points out:
“Large language models … are not databases; they are text predictors, turbocharged versions of autocomplete. […They] are also wicked hard to update, typically requiring full retraining, sometimes taking weeks or months.”
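To make Marcus’s point concrete: under the hood, a large language model is essentially a loop that predicts the most plausible next token, appends it, and repeats. Here’s a minimal sketch of that loop, using the small open-source GPT-2 model via Hugging Face’s transformers library purely as a stand-in (the models behind ChatGPT are vastly bigger, but the loop is the same):

```python
# "Turbocharged autocomplete" in miniature: predict the next token, append, repeat.
# GPT-2 is a tiny stand-in here; ChatGPT-class models are far larger but work the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models are"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(30):                               # extend the text by 30 tokens
        logits = model(input_ids).logits              # a score for every candidate next token
        next_id = logits[0, -1].argmax()              # greedy: take the single most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything the model “knows” lives in those frozen weights; there is no database row to edit, which is why updating one typically means retraining it.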
Of course, investing heavily in fundamental technologies, then waiting years to reap the benefits of gargantuan profit machines that they enable, has long been a Silicon Valley modus operandi. And of course there’s a long, slow lag between tech breakthroughs and the creation of mature, polished products.
…But there’s something else going on here, too. Generative AI isn’t just breakthrough new tech; it’s also super weird new tech, in that it is — astonishingly — genuinely creative. (Stochastically, admittedly. But still!)
The irony is that people’s immediate knee-jerk reaction is to want it not to be. Folks, ChatGPT isn’t “lying” to you; it’s being creative! That’s what it does! Which is amazing, breathtaking, mind-blowing —
—oh, so it turns out you don’t actually find creativity all that useful?
…Have you tried telling your artist friends that?
Go ahead. They won’t be surprised. The business world values reproducibility and legibility far more than creativity. Businesspeople may pay lip service to creative work, but they won’t actually fork over much money for it. Artists, writers, musicians, etc., have always been direly poorly paid, except at the absolute apogee of their fields. Ask any artist, writer, or musician.
Heck, ask me, I’m a writer. I have an epic (and deceptively hard) science fiction novel — about AI, appropriately enough; entitled Exadelic — coming out in September from, not just a major publisher, but the world’s premier publisher of SF. (The book just got a cover, too. Isn’t it gorgeous?) Want to know how much I got paid for it?
An advance of $35,000, which gets spread over two years. By novelist standards, this is extremely good! (Especially compared to the ill-fated “average professional author,” who pulls in less than $10K a year from writing. Most books do not even “earn out” their advances.) But what was, for me, pretty near a best-case publishing scenario … is only a small fraction of what I make from software engineering.
Generative AI doesn’t have an enormous product market largely because there just isn’t that much demand for most creative work, relative to its supply. Another painful irony, especially if you’re an already-undervalued artist, is that the latest AI breakthrough is in fields already experiencing an imbalance of immense supply and limited demand. This is true of newly opening subfields, too, like voice work.
None of this means modern AI is anything less than a series of massive and massively important breakthroughs, to be clear! It just means that the extraordinary capabilities of generative AI are — so far — largely orthogonal to what today’s businesses value.
Of course, LLMs also have more reproducible abilities, like transforming data from one format to another. And, especially, their ability to generate (and explain) one very specific kind of highly stylized text: software code. It seems likely GitHub Copilot’s ARR leaves Jasper’s somewhere way back in the dust.
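To illustrate that more boring, reproducible mode (a purely hypothetical sketch using the OpenAI API; the model name, prompt, and data are arbitrary stand-ins, and it assumes an OPENAI_API_KEY in your environment):

```python
# Hypothetical sketch: an LLM as a humble format converter rather than a creative engine.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

csv_snippet = "name,role\nAda Lovelace,mathematician\nGrace Hopper,rear admiral"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Convert this CSV to a JSON array of objects, and output nothing else:\n"
                   + csv_snippet,
    }],
    temperature=0,  # dial the creativity down; here we want boring and repeatable
)

print(response.choices[0].message.content)
# Hoped-for (not guaranteed!) output:
# [{"name": "Ada Lovelace", "role": "mathematician"}, {"name": "Grace Hopper", "role": "rear admiral"}]
```

Note the temperature=0: for this kind of work you want the model at its least creative, which is rather the point of the last few paragraphs.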
While ChatGPT and Stable Diffusion have understandably grabbed the world’s attention, there’s far more to modern AI. See e.g. “DeepMind's protein-folding AI cracks biology's biggest problem” and “Unlocking de novo antibody design with generative artificial intelligence.”(!) But modern generative media AI is still basically like having an infinite number of high-school interns on tap. This is amazing … but is it productive? Do high-school interns usually create net economic value? They can, if carefully curated, in specific contexts, such as that seized by Jasper — but mostly the answer is no.
(Side rant; quantified economic productivity is a blinkered and limited metric. “When a measure becomes a target, it ceases to be a good measure.” Many technologies vastly increase human happiness without increasing GDP. For instance, many millions are far happier working from home; “commuting is the daily activity most injurious to human happiness.” We could try to quantify this, via the salary premium they would require for a similar in-office job … but last I checked the pandemic flight from offices was not accounted for as a gargantuan boost to GDP. Similarly, the lives of retail employees, security guards, truck drivers, etc. were enormously improved by smartphones, which mitigate their jobs’ relentless boredom … but that’s even harder to measure, because many minimum-wage employees need money much more than well-paid white-collar work-at-homers, so would feel grimly compelled to take an only marginally better-paid no-phone job. “If we can’t measure it, therefore it doesn’t matter” is what I call the scientific fallacy. It comes up a lot in today’s world. Just because a technology’s benefits aren’t captured in GDP numbers doesn’t mean they aren’t real, huge, and important. The much improved day-to-day on-the-job happiness of people over the last few decades is only one of many examples.)
The italicized paragraph above is an aside, though, because Microsoft and Google and Amazon and venture capitalists everywhere aren’t betting on profound but immeasurable improvements to the human condition. (Although I do think we’re already seeing those, and we’ll see many more.) What they’re really betting on is the future. I got a sneak preview of the still-unreleased GPT-4 three months ago, and it blew my mind, and the (very impressive) minds of everyone else in that room. At worst, GPT-4 seemed like having an infinite number of college interns on tap, rather than high-school ones … and that’s quite a step function.
Nobody is betting their future on AI because of Jasper’s ARR. They’re doing so because they can sense qualitatively that this is a revolutionary breakthrough, and, even more importantly, because the pace of the field is absolutely breakneck right now, and shows no signs of slowing down. I encourage you to subscribe to Jack Clark’s ImportAI and Davis Blalock’s Davis Summarizes Papers to get a sense of just how fast things are moving. VCs, as always, are betting on velocity and ferocity, rather than position. Personally I’m 99.7% sure that this time they’re absolutely right.
That isn’t to say there isn’t any room for skepticism. There always is—
but even the bears are mostly arguing about the trajectory, not the direction, of the future. As such, I’ll leave you with the ultimate bull case: OpenAI’s original business plan. (Substack won’t let me embed a YouTube video cued to a particular time, alas, so either click here or fast-forward to 31 minutes, 35 seconds.) Watch, marvel, and chuckle … a little uneasily.