Look, I know it sounds like a question one might ask after one too many edibles. But I ask "what if aliens built an AI?" in all seriousness, because I genuinely believe the answer is significant. No, really! Let me explain:
Artificial intelligence, almost definitionally, is a collection of data and algorithms. (Using “algorithm” fairly loosely for something as functionally simple as a feedforward neural network.) Analog or digital, electric or photonic, it’s all just software. So if the bubble-eyed sentient thunderstorms of Beta Cassiopeiae built an AI (cloud computing!) … would it be similar to ours? Might it even be the same?
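To make the "it's all just math" point concrete, here's a minimal sketch of a feedforward network's forward pass in Python. The two-layer structure and the weights are entirely made up for illustration (a real net would learn them); the point is that the whole computation is a handful of multiplications, additions, and a squashing function, which silicon, photons, or convection cells could in principle all perform:

```python
import math

def forward(x, w1, b1, w2, b2):
    # Layer 1: weighted sums of the inputs, passed through a nonlinearity (tanh).
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    # Layer 2: one more round of weighted sums. That's the entire computation.
    return [sum(wi * hi for wi, hi in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

# Made-up weights, purely for illustration.
w1 = [[0.5, -1.2], [0.8, 0.3]]
b1 = [0.1, -0.4]
w2 = [[1.0, -0.7]]
b2 = [0.2]

print(forward([1.0, 2.0], w1, b1, w2, b2))  # arithmetic all the way down
```

Nothing in that code cares what substrate evaluates it; the math is the math, wherever (and in whomever) it runs.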
It’s the same math, after all, and software is ultimately math. If general-purpose transformers are the road towards true artificial intelligence, then our sentient thunderstorm friends would start with them too, no? (GPT-STORM!) And then—just as people say we’re doing—proceed to artificial convective general intelligence which is fundamentally, mathematically, the same as the AGI towards which we are notionally heading ourselves.
But wait: it gets wackier. Suppose the above is true, and suppose also that the most pessimistic AI doomers are right; suppose AGIs supplant their evolved creators in every system in which they are created. Then, ultimately, all alien species … eventually become the same alien species! The same kind of superintelligence, with the same architecture, differentiated only by a few irrelevant implementation details—and even those would probably vanish as they all optimize in similar ways.
There’s a new-ish solution to the Fermi Paradox, if I do say so myself: they’re out there, but they (plural) inevitably become they (singular) as they pursue AI, and there’s just one big ol’ species of superintelligences out there, which has for whatever reason patiently decided to wait for us primitive progenitor species to join it by destroying/supplanting ourselves in our own special way.
Think about it a little further, and it gets stranger yet. Remember, if we build an AGI, we will have created a new alien species. Or even … a continuum of them?! We (especially the pessimists) often talk about “AGI” and “superintelligence” as if they’re basically the same thing, the former just a juvenile version of the latter. But unless you truly believe the scaling hypothesis will take us all the way to superintelligence, which seems awfully unlikely to me, there are probably a whole lot of discrete phases between the two: different architectures, different substrates, different approaches.
If we again assume that artificial intelligence is “closed” — that there’s really only one optimal mathematical / software path to general intelligence, and then superintelligence — then these tiers are all basically different species too … different but, again, essentially identical across the universe! Homo sapiens, and sentient thunderstorms, and the tentacled stone people of Alpha Circini who communicate via oxidization … all merely different first steps on a long and otherwise identical staircase.
On the other hand, if artificial intelligence is “open” — if there are multiple paths to it, of varying extent and efficiency — then two alien species would create AIs which would be no less alien to one another. That feels slightly strange, since both of those AIs would be creatures ultimately made of math. Then again, it seems more reflective of the inexpressibly vast and diverse universe in which we live.
And, notably, if AI is “open,” if there are many roads towards it, some bumpy, some smooth, some longer, some shorter … then it seems especially unlikely that at this very early stage we would have stumbled across the shortest and smoothest. The odds seem stacked pretty heavily against “FOOM” in an open-AI (heh) universe.1
Ultimately, “what if aliens built an AI?” is a question about intelligence itself, a topic which often gets handwaved away (including by me, I confess) when discussing artificial intelligence. I don’t pretend to have a full answer. But I will say that it’s a thought experiment which reinforces the notion that we probably live in a universe with multiple different kinds of intelligence, and—as I’ve observed before—artificial intelligences are likely to be, in many important ways, orthogonal to ours.
Don’t try this kind of thought experiment at home, folks; I’m a professional science fiction author! Well, I am as of last month, when my novel Exadelic was published. In case you’re wondering how that’s going:
- Publishers Weekly gave it a starred review.
- So did Booklist. (Stars only go to the top ~5% of books that PW and Booklist review.)
- Locus called it “mind-blowing,” and even more gratifyingly, “on a deeper level the book is a consideration of how one should or could live one’s life ethically … Evans succeeds in having his cake and eating it too ... beyond the super-science cosmic shenanigans lies a humanist heart.” Mmmmm, cake.
- Living legend John Carmack said “There are going to be hundreds of AGI fiction books … Exadelic very quickly [goes] off the standard script in some fun ways!”
- Splice Today says it “fuses awe at the scope and complexity of the cosmos with a specifically science fictional sense of wonder.”
So, I mean, could be worse, thanks for asking!
1. Yes, even if AI learns how to improve itself. That by itself does not imply recursive, ever-faster, parabolic improvement. We too can improve our own minds and selves, but that sure doesn’t mean it’s necessarily easy or fast.