Who's Who in A.I.
Dramatis personae for our brave new era. Well ... this month's.
In recent conversations, in person and online, I’ve realized there’s a huge amount of interest in modern AI … and that many outside the field lack context assumed by those inside. Not technical context, but a high-level understanding of the AI industry today, who its major players are and what they do. It’s easier to grok the state of the art when you understand the state of the industry. One can pick that up by osmosis, as I (think I) have, but offering newbies a single cheat sheet seems helpful. Hence this post. (And also I’m too busy this week for a technical-deep-dive-for-non-technical-people post.)
A few caveats:

- The industry is moving so fast that much of this may be obsolete in six months (though I think it's unlikely the major players will change in that time).

- This is obviously a personal and idiosyncratic list, and the commentary even more so.

- I still think of these posts as journalism, and as a journalist I’ve always tried to avoid (conscious or unconscious) bias, so anyone I consider a friend or friendly acquaintance will be mentioned either only in passing, or not at all.
Without further ado, I give you today’s AI dramatis personae:
Gradient Ascendant is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
OpenAI

Obviously we start here. Love them or hate them, for good or for ill, OpenAI are unquestionably the most potent engine of change in the tech world today … and you can at least make a coherent case for dropping “tech” from that phrase.
Many had believed “Silicon Valley,” that baroque superorganism spread across fifty miles of California, to be yesteryear’s entity, a dinosaur in our brave new decentralized world with the gospel of tech startups disseminated everywhere. Many were wrong. OpenAI is pure unadulterated Valley, co-founded by (among others) Elon Musk, Peter Thiel, LinkedIn’s Reid Hoffman, Stripe’s Greg Brockman, and Y Combinator’s Jessica Livingston, with Facebook alum / Quora CEO Adam D’Angelo on its board.
They have been controversial since birth. Their original business plan was “build a superintelligence and ask it how to make money.” (Not a joke.) Musk parted from them in one great schism. Another led to the birth of Anthropic, below. Co-founder Andrej Karpathy left to run AI at Tesla for five years — and returned. In 2019 OpenAI abruptly abandoned both their nonprofit status and their “open”ness, declaring of GPT-2 (basically a toy compared to today’s models), “Due to our concerns about malicious applications of the technology, we are not releasing the trained model.” Much criticism and/or mockery followed.
Amid all this tumult they reeled off an entire slew of extraordinary accomplishments that basically defined modern AI. Their GPT-3 large-language-model family powers many-to-most actual AI products, including Copilot, Jasper, and of course their own ChatGPT, the fastest-growing technology application of all time. Their DALL-E image generator initially defined generative AI (though Midjourney now seems the vanguard of image generation), and it too offers a public API. They’ve also released other odds and ends, such as a (genuinely open) speech-to-text model named Whisper. All this with their old tech; GPT-3 was released in June 2020. Late last year, private sneak previews of GPT-4 set the tech whisper network alight with barely credible rumors of its extraordinary capabilities. (GPT-4 is believed to power Bing/Sydney. OpenAI, as is its wont despite its name, does not seem eager to confirm or deny.)
Their future looks supernova-bright, which is perhaps why they just published a surreal blog post regarding their plans for “AI systems that are generally smarter than humans” … and beyond(!) Meanwhile in other, almost-as-worldshaking news, they announced they are … uh … partnering with Bain & Company to bring ChatGPT to enterprises? Well, when the world’s your oyster, I guess you get to both have your transformative cake and eat your enterprise gruel, and OpenAI gets to be every face of modern AI to everyone. This recent finger-in-every-pie flourishing has been funded by a whopping, and remarkably structured, $10 billion investment from — and, obviously, increasing partnership with —
Microsoft

How the mighty are risen again. Microsoft, which once seemed at real risk of becoming a derelict embarrassment like IBM, is at AI’s forefront. Microsoft Research keeps churning out new AI papers, including the notable VALL-E voice synthesizer. They own GitHub, whose Copilot is perhaps the most widely used, and almost certainly the most concretely useful, modern AI product. (Disclaimer/disclosure: I directed GitHub’s Archive Program, though I was never an employee.) Their machine-learning-as-a-service offerings are on par with Amazon’s and Google’s clouds.
Most important, though, is their increasingly close partnership with OpenAI. Microsoft’s billion-dollar investment way back in 2019 now looks brilliantly prescient, and — along with their other wins — makes Satya Nadella seem a turnaround artist only one notch below Steve Jobs. How crazy good has his stewardship been? It’s 2023 and people can’t stop talking about Bing.
Anthropic

Mysterious in both birth and deed, Anthropic calved out of OpenAI a few years ago and are now, at least in some senses, their nearest peer. I don’t pretend to know the details of the schism, but given that Anthropic was originally funded by $500 million from none other than disgraced crypto tycoon Sam Bankman-Fried — yes, really — my guess is it had something to do with differing perspectives on effective altruism.
Since then they’ve done really interesting work. “Claude,” their GPT-3 competitor, works within their “Constitutional AI” guardrails … basically, AIs trained to “align”/supervise other AIs, ensuring they don’t go off the rails — which seems surprisingly effective. Effective enough that they’re now acting as the foundational platform for other startups, and recently won another $300 million in funding from…
Google

Google gave us modern AI, but — again, for better/worse, good/ill — they cannot take it away. They invented the transformer, the T in GPT, the key technology that brought us modern generative AI. They had a mindblowing chat AI, LaMDA, quite some time before ChatGPT; their Imagen diffusion model was notably superior to OpenAI’s DALL-E (for instance, it could spell); and they have published reports and samples of absolutely incredible work such as their MusicLM.
…But that’s the thing: reports and samples. Nobody outside the Googleplex could touch, use, experiment with, or build on any of these things. In fact, nobody still can. Better/worse, good/ill, etc., they have been left in the dust by the willingness of OpenAI and Microsoft and especially Stability (see below) to put their work into the real world, see how people use and abuse it, and iterate accordingly. Which leaves not just their AI lead under threat, but also their golden-cash-cow search engine, suddenly losing mindshare to Bing (Bing!) for the first time ever.
Worst of all, they’re losing talent. It seems like half of the researchers who were working at Google Brain this time last year have since moved on … often to OpenAI. Mighty Alphabet finds itself, unaccustomedly, on the wrong side of a feedback loop. But don’t write them off yet! They have a secret weapon, one which could quietly lay claim to the greatest agglomeration of AI talent on the planet: DeepMind, which is using AI to make fundamental advances in biology and even mathematics(!) … and which is wholly owned by Alphabet.
Stability AI

Stability is my personal favorite of all the AI companies extant; it, unlike any of these others, is genuinely AI for the people. (Stipulated: better/worse, good/ill.) Their Stable Diffusion image generator, built in collaboration with the Ludwig Maximilian University of Munich, is genuinely open-source and open-weight. Anyone can download and run it on their MacBook Pro, no registration or supplication required. And with a $100 million war chest, and all the momentum in the world, they have far more than Stable Diffusion in mind.
By “they” I really mean their CEO, Emad Mostaque, by far the most interesting character among all the personalities here; born in a refugee camp in Jordan to Pakistani parents, raised in Bangladesh, studied computer science at Oxford, worked at hedge funds in the City of London while moonlighting as an Islamic scholar and advisor on Islamic extremism, then went through a cryptocurrency phase before funding the training of Stable Diffusion and becoming an AI impresario. You must admit it’s not a boring CV. Stability has (one set of) their sights locked on Hollywood, and I’d be surprised if they didn’t wind up making the world a more interesting place. (Stipulated that sometimes this is phrased as a curse.)
HuggingFace

Yes, really. Their name is terrible, but their service is crucial. HuggingFace is basically GitHub for AI — well, code itself still lives on GitHub, but one of the weird things about AI is that it doesn’t actually involve very much code at all. HuggingFace is for everything else: models, datasets, demos, libraries, etcetera. Take Stable Diffusion: in principle, it is open-source and could be transferred person-to-person, but in actual practice, HuggingFace is the distribution hub for it and all its dependencies.
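To make “distribution hub” concrete: every model repository on HuggingFace exposes its files at a predictable URL, which is how downstream libraries and apps fetch weights like Stable Diffusion’s. A minimal sketch of that URL scheme — the repo and file names here are illustrative examples, not a recommendation of any particular checkpoint:

```python
# Sketch: how a HuggingFace model repo maps to a direct-download URL.
# The repo_id and filename below are illustrative, not authoritative.

def hub_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file hosted in a HuggingFace repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# e.g. the main Stable Diffusion v1.5 weights file:
print(hub_file_url("runwayml/stable-diffusion-v1-5",
                   "v1-5-pruned-emaonly.safetensors"))
```

In practice you would use the official `huggingface_hub` library rather than building URLs by hand, but the point stands: a model is just versioned files in a repo, addressable by anyone.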
Facebook

Facebook’s AI team is in the curious position of frequently releasing very expensive and large-scale work that … nobody else seems to care about all that much. Galactica was a painful debacle. LLaMA seems technically impressive, but you have to apply for access and there doesn’t seem to be much buzz around it. CICERO, their model that can win at the strategy/negotiation game Diplomacy, excited a brief round of very impressed commentary … but nothing since. They seem to be trying to plot a middle course between Google’s “never release anything until scooped by multiple other companies” and Microsoft’s “roll up to see the mighty Bing!”, and it seems an awkward line to walk.
Other established AI product companies
Whether or not they count as “unicorns,” there’s a growing cohort of AI companies which are not tech titans but have grown beyond the “scrappy little startup” stage. Midjourney seems to have taken off and established itself as the thinking person’s image generator, which I wouldn’t have expected (although ControlNet, discussed below, on top of Stable Diffusion may change that). Jasper is pulling in something like $100M a year. And while there are a lot of companies offering frameworks and tooling and so forth — which is to be expected at this stage —
…to my mind Scale is the most substantially established of them.
A vast burbling ecosystem/community of enthusiasts and startups
Everyone is gaga for AI. Entire communities, nay, entire ecosystems, have erupted around generative AI images alone. Stable Diffusion — which is still not especially easy to get running! — was downloaded two million times last month alone. Startups are growing like mushrooms after rain, raising many millions in what is otherwise a grim time of VC and startup pullback; a good (though not exhaustive) way to follow along is the generative-ai tag at my old stomping ground TechCrunch.
This is especially true in San Francisco. GitHub Copilot creator Alex Graveley hosted an “AI Tinkerers” meetup last month in SF, expecting a turnout of maybe 20, and…
Hayes Valley in downtown San Francisco has received the nicknames “Cerebral Valley” or (better) “Bayes Valley” for its density of AI people/research. Local incubators are similarly all over AI. And it’s a field so new and so unexplored that individuals can make a huge impact, like Lvmin Zhang and Maneesh Agrawala, who gave us ControlNet, or Roon, or the inimitable Gwern. Meanwhile, the UK too is punching above its weight, thanks to Oxbridge, and somewhere in Texas, the legendary John Carmack is hard at work on Keen, which — zagging where others zig — has zero to do with LLMs. All these are but the most prominent edges of a massive seething frenzied AI iceberg.
Academia

Modern AI was born from academia, and to a perhaps surprising degree, the ivory tower still guides the industry. Advances in the field are still primarily measured by scientific papers, not product launches. Academics at top-tier schools regularly collaborate with industry researchers … and why wouldn’t they? Most of those same researchers were their fellow grad students not long ago. But capitalism is working its Borg-like dark magic and draining brains from the dreaming spires; even pure researchers find it hard to resist the gravitational pull of an obscene salary.
Biotech

I have only a passing knowledge of AI for biotech, but I know there’s a lot going on. DeepMind is the most prominent player, but there is a whole passel of companies that have raised very large sums of money to use AI to study our genomes, and our genes’ expression into biology, with an eye towards generating extremely targeted and powerful therapies.
AI Safety and AI Ethics
Blissfully naive outsiders tend to think that these are synonyms. Oh, you poor sweet summer children. There is nothing more contentious, nothing more controversial, and, frankly, nothing more insane (and I use that word advisedly) in the field than Safety, Ethics, and the tension between them.
Briefly, AI Safety people worry about superintelligences destroying humanity, and AI Ethics people worry about bias, discrimination, AI-powered systems which might render automated judgements on marginalized people, and the ethical provenance of AI training data. AI Ethics people tend to contemptuously dismiss AI Safety as not actually a thing; AI Safety people would probably concede that AI Ethics are a thing, if you could get them to even think about the subject, but since they believe AI Safety is more important than climate change, nuclear war, or pretty much any other issue ever in the history of humanity, you probably won’t distract them with mere ethics.
What they do have in common, though, is that both groups tend to attract … let’s just say … strongly opinionated people, who tend to work either singly or in small groups, but are interwoven into a semi-coherent mass. Of late the Bay Area is practically speckled with “independent alignment researchers.” (“Alignment” being basically code for “AI Safety.”) Most prominent and most pessimistic among them is Eliezer Yudkowsky, recently in the spotlight for a much-shared podcast appearance, but there are many others. Holden Karnofsky, an OpenAI board member and (until recently) purse-string-holder at the multibillion-dollar Open Philanthropy fund, just quit the latter role to pursue AI Safety research full-time. The great Internet writer David “Meaningness” Chapman just published what is basically an AI Safety / anti-AI book at betterwithout.ai. It is safe to say that concern and fear are no less rampant than excitement and optimism.
China

Obviously treating a nation of more than a billion, one that shares the technical forefront with the US and Europe, with a single broad brush is a grossly unfair oversimplification … but it is also a reasonable depiction of US industry perceptions. Everyone agrees that China has very advanced AI and AI is very big in China. Everyone knows that Chinese researchers constantly publish AI papers. But very few of these seem to cross over into US industry awareness, much less interest, much less integration. TechCrunch’s Rita Liao’s description of it as “a parallel generative AI universe” seems pretty apt.
That’s this month’s version! …By summer things might be quite different. As is screamingly obvious, the entire field is in a remarkable state of eruptive ferment.