👽🤖 What if aliens built an AI? 🤖👽
This is a less hypothetical question than one might expect.
Look, I know it sounds like a question one might ask after one too many edibles. But I ask in all seriousness: what if aliens built an AI? I genuinely believe the answer is significant. No, really! Let me explain:
Artificial intelligence, almost definitionally, is a collection of data and algorithms. (Using "algorithm" fairly loosely for something as functionally simple as a feedforward neural network.) Analog or digital, electric or photonic, it's all just software. So if the bubble-eyed sentient thunderstorms of Beta Cassiopeiae built an AI (cloud computing!) … would it be similar to ours? Might it even be the same?
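The "it's all just math" point can be made concrete. Here's a minimal sketch (my own illustrative code, with made-up weights): a feedforward network is nothing but multiply, add, and squash, so any substrate that can do arithmetic — silicon, photons, or a very patient thunderstorm — computes exactly the same function.

```python
import math

def feedforward(x, weights, biases):
    """One pass through a fully connected network: pure arithmetic,
    identical on any substrate that can add and multiply."""
    for W, b in zip(weights, biases):
        # Each layer: weighted sum of inputs, plus bias, through tanh.
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

# A toy two-layer network with arbitrary illustrative numbers.
W1 = [[0.5, -0.2], [0.1, 0.8]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.0]
print(feedforward([1.0, 2.0], [W1, W2], [b1, b2]))
```

Run it on Earth or on Beta Cassiopeiae and the same numbers come out; nothing in the computation cares what the hardware is.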
It's the same math, after all, and software is ultimately math. If general-purpose transformers are the road towards true artificial intelligence, then our sentient thunderstorm friends would start with them too, no? (GPT-STORM!) And then, just as people say we're doing, proceed to artificial convective general intelligence which is fundamentally, mathematically, the same as the AGI towards which we are notionally heading ourselves.
But wait; it gets wackier. Suppose the above is true, and suppose also that the most pessimistic AI doomers are right; suppose AGIs supplant their evolved creators in every system in which they are created. Then, ultimately, all alien species … eventually become the same alien species! The same kind of superintelligence, with the same architecture, differentiated only by a few irrelevant implementation details, and even those would probably vanish as they all optimize in similar ways.
There's a new-ish solution to the Fermi Paradox, if I do say so myself: they're out there, but they (plural) inevitably become they (singular) as they pursue AI, and there's just one big ol' species of superintelligences out there, who have for whatever reason patiently decided to wait for us primitive progenitor species to join them by destroying/supplanting ourselves in our own special way.
Think about it a little further, and it gets stranger yet. Remember, if we build an AGI, we will have created a new alien species. Or even … a continuum of them?! We (especially the pessimists) often talk about "AGI" and "superintelligence" as if they're basically the same thing, the former just a juvenile version of the latter. But unless you truly believe the scaling hypothesis will take us all the way to superintelligence, which seems awfully unlikely to me, there are probably a whole lot of discrete phases between the two: different architectures, different substrates, different approaches.
If we again assume that artificial intelligence is "closed," that there's really only one optimal mathematical/software path to general intelligence, and then superintelligence, then these tiers are all basically different species too … different but, again, essentially identical across the universe! Homo sapiens, and sentient thunderstorms, and the tentacled stone people of Alpha Circini who communicate via oxidization … all merely different first steps on a long and otherwise identical staircase.
If, on the other hand, artificial intelligence is "open," if there are multiple paths to it, of varying extent and efficiency, then two alien species would create AIs no less alien to one another. On the one hand, that feels slightly strange, since both would be creatures ultimately made of math. But on the other, it seems more reflective of the inexpressibly vast and diverse universe in which we live.
And, notably, if AI is "open," if there are many roads towards it, some bumpy, some smooth, some longer, some shorter … then it seems especially unlikely that at this very early stage we would have stumbled across the shortest and smoothest. The odds seem stacked pretty heavily against "FOOM" in an open-AI (heh) universe.¹
Ultimately, "what if aliens built an AI?" is a question about intelligence itself, a topic which often gets handwaved away (including by me, I confess) when discussing artificial intelligence. I don't pretend to have a full answer. But I will say that it's a thought experiment which reinforces the notion that we probably live in a universe with multiple different kinds of intelligence, and, as I've observed before, artificial intelligences are likely to be, in many important ways, orthogonal to ours.
Don't try this kind of thought experiment at home, folks; I'm a professional science fiction author! Well, I am as of last month, when my novel Exadelic was published. In case you're wondering how that's going:
Publishers Weekly gave it a starred review.
So did Booklist. (Stars only go to the top ~5% of books that PW and Booklist review.)
Locus called it "mind-blowing," and even more gratifyingly, "on a deeper level the book is a consideration of how one should or could live one's life ethically … Evans succeeds in having his cake and eating it too … beyond the super-science cosmic shenanigans lies a humanist heart." Mmmmm, cake.
Living legend John Carmack said "There are going to be hundreds of AGI fiction books … Exadelic very quickly [goes] off the standard script in some fun ways!"
Splice Today says it "fuses awe at the scope and complexity of the cosmos with a specifically science fictional sense of wonder."
So, I mean, could be worse, thanks for asking!
¹ Yes, even if AI learns how to improve itself. That by itself does not imply recursive ever-faster parabolic improvement. We too can improve our own minds and selves, but that sure doesn't mean it's necessarily easy or fast.