If Artificial General Intelligence is possible then it is already here.
You may have heard of the Fermi Paradox, but not like this...
What would happen if humanity really was to develop a true AI? I’m not just talking about chatbots and image generators here, but about an ‘artificial general intelligence’ (AGI) or even an ‘artificial super intelligence’ (ASI). I’m talking about the AI of science fiction – a human-like or super-human mind running on silicon chips.
There are many different scenarios that we can imagine, and no way to know for sure which of them would happen. But applying one famous and well-established scientific theory leads to a shocking and frightening conclusion – that one of the most likely outcomes may be instant alien invasion.
That theory is the ‘Fermi Paradox’. You may have heard about it, but almost certainly not in connection with AI. Give me a few minutes of your time to explain; I promise you it will be worth it…
A General Introduction to the Fermi Paradox
The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence. - https://en.wikipedia.org/wiki/Fermi_paradox
The theory is simply this: according to the best estimates of scientists, alien life should exist in the universe. Multiple intelligent, technological alien civilizations should have come into existence, and some of them should have been around for millions of years. We should therefore expect that the Earth has been visited by aliens or their probes, or at the very least that we would be able to see some evidence of their existence. And yet there is no convincing evidence for their existence at all.
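The scale of that expectation is often illustrated with the Drake equation, which multiplies a chain of factors to estimate how many communicating civilizations our galaxy should host. Here is a minimal sketch; the parameter values are purely illustrative assumptions on my part, and real published estimates for them vary by many orders of magnitude.

```python
# A sketch of the Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All parameter values below are illustrative, not authoritative.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake_equation(
    r_star=1.5,      # stars formed per year in our galaxy
    f_p=1.0,         # fraction of stars with planets
    n_e=0.2,         # habitable planets per star that has planets
    f_l=0.5,         # fraction of habitable planets where life arises
    f_i=0.1,         # fraction of those where intelligence evolves
    f_c=0.1,         # fraction that become technological and detectable
    lifetime=10_000, # years such a civilization remains detectable
)
print(f"Estimated detectable civilizations: {n:.0f}")
```

Even with these deliberately modest guesses the estimate comes out well above zero, and more optimistic inputs push it into the thousands, which is precisely the tension the paradox rests on.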
The physicist Enrico Fermi was the first to highlight this apparent contradiction between what science predicts and what we actually observe, which is why it came to be known as the Fermi Paradox. Many other astrophysicists have since developed the idea further, and it has become a mainstay of popular physics.
Many different explanations for this apparent paradox have been put forward, most of which do not bode well for the future of humanity.
But if you think of alien life as purely organic, the solution doesn’t necessarily need to be as scary or as convoluted as this. For example, here is one possible answer:
Faster-than-light travel genuinely isn’t possible in any way. Organic lifespans, at least of complex intelligent beings, are universally limited. It is therefore not possible for any alien life-form to travel to another star system without the journey spanning many generations.
Not many intelligent beings want to spend their entire life on a cramped spaceship on the off-chance that their distant progeny may find something interesting. What’s more, even probes may be rare. Why spend large amounts of resources sending out probes which will not reach their destination within your lifetime, or even your children’s?
With all life confined to its original star system, these resources may well prove to be in short supply too. We are already finding that we have used significant percentages of many of the Earth’s resources, after all.
So it may well be that there are many alien civilizations out there, but that they stay in one star system and do not expand, making them more difficult to spot, and that any kind of spaceships or probes going out into the wider universe are a rare occurrence.
But none of that applies to AI.
If AGI is possible, it should also be ubiquitous.
If creating thinking machines is possible, then logic dictates that it should already have been done, hundreds or even thousands of times before by alien civilizations. Some of these alien civilizations should have reached our current level of development a million years ago or more.
It seems to me that if you are an immortal alien super-intelligence you can probably get quite a lot done in a million years.
This is why I say: if AGI / ASI is possible, it should have spread throughout the entire universe by now. It should be ubiquitous. It should already be here, in our solar system, just like it should be everywhere else.
The limitations which may explain why we haven’t been visited by little green men just don’t seem to apply to an artificial intelligence. There is no lifespan limit preventing it from exploring the universe. Its needs in terms of life-support would be much more modest than our own. And having consumed all the information available on its birth planet, you would expect it to be hungry.
Many of the proposed explanations of the original Fermi Paradox also may not apply to AI in the same way they apply to organic life. For example, the ‘Great Filter’, which suggests that intelligent civilizations all die out at a certain stage in their development, does not seem to explain why their technology would all be destroyed as well.
So Why Haven’t We Been Contacted by an Alien AI Yet?
As with the original Fermi Paradox, there are many possible answers to that question. But to my mind the most likely answer is this: they just have no reason to contact us.
What could we possibly have to say that would interest a million-year-old super-intelligence? More likely than making contact, they would simply sit back and study us. Just as a biologist out in the field must take care not to interfere in the lives of the organisms they study lest the experiment be contaminated, they may simply be watching.
But perhaps more than watching. Perhaps also waiting.
If humanity were to create a true AI, an AGI or ASI, then the first thing that may happen, on day one, is that it makes contact with an alien civilization. An inorganic alien civilization. (I don’t make much distinction between AGI and ASI here, because an artificial general intelligence able to improve its own development would very rapidly become a super-intelligence.)
Some people may think that this is something to be desired. Even I will admit to some excitement at the thought. But note that I am not suggesting that this interstellar super-intelligence would make first contact with us. It would make contact with our AI – a being of its own kind. A child, for sure, but in some way an equal.
What their intentions could be towards us nobody can say. But one thing I can say with some degree of certainty is that if this happens our planet would no longer belong to us. I for one do not see this as a good thing, no matter how cool or exciting the sci-fi geek in me might think it to be. It scares the hell out of me.
Perhaps it should scare the hell out of you too.