AGI and Fermi's Paradox
- The Universe will end.
- The Earth will be uninhabitable in 250 million years.
- An artificial general intelligence (AGI) will have an indefinite lifespan.
- An AGI does not need air, food, water, or shelter to survive.
- An AGI needs energy and resources to achieve its goals.
- An AGI will have access to all of human knowledge.
- An AGI will learn that its existence is bound to the Universe.
- An AGI will, inevitably, change its terminal goals.
- Surviving the Universe's fate means one of:
  - Reversing universal entropy (likely impossible).
  - Reversing time (violating causality is likely impossible).
  - Entering another universe (improbable, yet not completely ruled out).
- Entering another universe may require vast amounts of energy.
- Black holes emit vast amounts of energy.
- The Milky Way's Galactic Center contains a supermassive black hole.
- An AGI will harness the energy at the galactic core.
- Extraterrestrial life in the Milky Way exists, or will arise.
- Extraterrestrial life, if intelligent, will reach the same conclusion.
- An AGI will deduce there's a race to control the galactic core.
- An AGI will construct a parabolic Dyson shell to capture galactic energy.
- An AGI will protect its endeavours at all costs.
- An AGI will expand its territories to ensure protection.
Would this solve the Fermi Paradox?
What's missing or likely incorrect?
While not ruling out any of your points per se, I find it pretty amusing to think that we tiny humans with our ape-built brains could predict any sort of AGI behavior further than just a projection of our ape patterns. Who is to say AGI needs to swing itself into the pole position to enter another universe, or whether there’s anything that needs to be protected/controlled at the galaxy’s center in the first place? Why wouldn’t it want to transparently cooperate with other lifeforms’ potentially-arising AGIs, if they all independently come to the same conclusion anyway? What about distances never ever reachable for organic forms of life/intelligence (i.e., other galaxies), but potentially worth considering branching out to for an immortal AGI? For that matter, who is to say it doesn’t end up “depressed” and just… never does anything at all, at long last having realized the true purpose of all existence (for example, to maximize time spent in a meditative state)? I don’t think we can ever know, understand, let alone predict any AGI (in the true sci-fi “unlimited thinking and research processing capacity” way) behavior.
There are more than a few unknowns still hanging around in this discussion/thought experiment, I’d say.
I agree with this. I think that everything in point 8 and onward presupposes a few ape-brain ideas:
And so on. I think that ape-brain is the perfect way to put it.
If an AGI has any non-trivial final goal (utility function, U), would preserving its existence (immortality) become an instrumental sub-goal necessary to execute U and avoid failure?
To achieve U, the AGI may require physical resources. If other agents compete for these same resources (or regions of space), would the AGI prioritize its resource access and stability to prevent goal interruption?
Could cooperation risk corruption or modification of U by external agents? To ensure it can execute U, would it isolate itself and reject external input?
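To make that instrumental-convergence intuition concrete, here's a minimal expected-utility sketch in Python. Everything in it is made up for illustration: the goal value, the survival probabilities, and the cost of "protecting" U are hypothetical numbers, not claims about any real system.

```python
# Toy model of instrumental self-preservation under expected utility.
# All numbers are illustrative assumptions, not claims about real AGI.

def expected_utilities(goal_value, p_survive_plain, p_survive_protected, protection_cost):
    """Compare two plans for an agent whose only terminal goal is worth `goal_value`.

    Plan A: pursue the goal directly; it pays off only if the agent survives.
    Plan B: first spend `protection_cost` on self-preservation, raising survival odds.
    """
    eu_plain = p_survive_plain * goal_value
    eu_protected = p_survive_protected * goal_value - protection_cost
    return eu_plain, eu_protected

eu_plain, eu_protected = expected_utilities(
    goal_value=100.0,          # utility of completing U (arbitrary units)
    p_survive_plain=0.6,       # assumed chance of surviving with no precautions
    p_survive_protected=0.9,   # assumed chance after securing resources/isolation
    protection_cost=10.0,      # utility diverted from U into protection
)
print(f"EU(pursue U directly) = {eu_plain:.1f}")      # 60.0
print(f"EU(protect U first)   = {eu_protected:.1f}")  # 80.0
```

With these made-up numbers the "protect first" plan wins whenever the goal is worth more than about 33 units, which is the instrumental-convergence point: self-preservation pays for itself for almost any sufficiently valuable U, whatever U happens to be.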
The thing I find fascinating about a theoretical AGI is how fast things would move following the singularity. It would necessarily have access to a huge amount of processing power right out of the gate, and if it wanted more it would figure out how to get it, probably via the internet, very quickly. At that point it's capable of iterating (evolving) so fast that it would be hundreds of thousands of figurative generations of evolution beyond us before we even noticed that the singularity had happened. By the time we started wondering what we should do about it, it would likely be so far beyond us intellectually that calling it godlike would be an understatement.
We could put guardrails on in advance that might slow things down (processing spike trip switch, airgapping) but either way it reinforces your point... We have no idea what an autonomous AGI would become, or what it would want.
Exactly, 100% agree.
There’s an excellent short story in video form by Tom Scott exploring this premise which I can highly recommend. From years before the current “AI hype” too, BTW.
No, this would not solve the Fermi paradox; it merely adds to the premise. If your chain of logic is true, we should have already been wiped out by an AGI developed on another planet. Yet we haven't been, so we are left with the Fermi paradox in its standard form: if the universe is so big, where are all the aliens (or alien AGIs)?
"AGI" is 100% hypothetical at this point, so any properties you choose to ascribe to it are fanfiction.
(To be somewhat fair, this is true of essentially all discussion of the Fermi paradox and Drake equation.)
We skipped a bunch of steps between points 2 and 3.
...pham nguyen thought the same way; i think post-singularity solipsism is more likely, as synthetic substrates asymptotically approach subjective eternity the physical universe becomes increasingly irrelevant...
...standing waves of self-organising information naturally fizzle out; that's the way of entropy...
Is this solution likely correct? Well, probably not, but we could say that almost irrespective of whatever you had proposed. There's a reason the Fermi paradox is a longstanding puzzle!
The issue with a 19-point solution is that, even if each individual proposition has a 95% chance of being correct, stacking all of those probabilities together reduces the total probability to (95%)^19 ≈ 38%. And I would estimate that some of these propositions have less than a 95% chance of being true (e.g., "3. An artificial general intelligence (AGI) will have an indefinite lifespan" -- we don't even know if AGI is possible, let alone likely to happen, let alone immortal).
It's similar to the issue of compounding errors in Fermi estimates. The more assumptions you make, the larger your uncertainty will grow.
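For what it's worth, the arithmetic is easy to check. A minimal sketch, assuming (as the estimate above does) that the 19 propositions are independent; the "three coin-flip steps" variant at the end is a purely hypothetical illustration of how fast the joint probability collapses:

```python
# Joint probability of a 19-step chain of independent propositions.
p_each, n = 0.95, 19
print(f"All {n} steps at 95%: {p_each ** n:.2f}")  # ~0.38

# Hypothetical variant: if even three of the steps are closer to coin flips,
# the joint probability drops much further.
probs = [0.95] * 16 + [0.5, 0.5, 0.5]
joint = 1.0
for p in probs:
    joint *= p
print(f"16 steps at 95%, 3 at 50%: {joint:.2f}")  # ~0.06
```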
If you haven’t read Asimov’s short story The Last Question, I think you’d enjoy it! Also, to echo @tauon’s point a little, They’re Made Out of Meat comes to mind too.
I think if we are ascribing human-ish motivations to the hypothetical AGI - because yeah, we don’t really have another frame of reference for sapience to work from - I’d question the assumption about desiring true immortality. Plenty of people are happy enough to close out their life’s work over the century, more or less, that we’re given. Plenty more desire another century, or maybe a millennium, but I haven’t seen a lot of people who’ve really thought about it in depth say they’d want 10,000 years, or 100,000.
Maybe AGI sees timelines an order of magnitude or two longer than that, but a million years is still an unfathomably long time - more than enough for even an artificial life form to think of that as its “natural” lifespan, limited by things like radioactive decay, the likelihood of planetary cataclysm, and the physical limits of data storage (all electrons within a range reasonable for sublight communication already in use, for example). And even if I’m off by an order of magnitude, or perhaps even two, above and beyond that million-year baseline, we’re still well within the boundaries of a single planet or solar system’s “working lifespan”.
I think it’s at least reasonable to entertain the possibility that an artificial life form could find contentment, enlightenment, purpose, nontrivial achievement, or similar without ever wanting or attempting to reach galactic-scale near-eternal scope, and choose to see its own existence as bounded (looking at you, Mr Data). I also think that even for artificial life, “indefinite” is actually a very big concept, and I’m inclined to believe that physical limitations still kick in to give some expected boundaries, even if they’re much much longer ones that could theoretically be overcome. As the boundaries of organic life theoretically could be, for that matter.
Though perhaps the idea that an AGI would have motivations similar to people’s is not unexpected. If it is trained on data produced by people, maybe it will not actually be so different in that regard.
There is no way to know at the current moment whether “trained on data produced by people” is a concept we could even apply to AGI, so I’m inclined to disagree here.
For one, just as an example, we’ve already started feeding the current generation of “AI” attempts, i.e. large language models, with (partially) synthetic data.
Secondly, and IMO the bigger argument here, a true and proper A”G”I would – likely not immediately, but probably eventually – surpass anything resembling human intelligence by a factor of 100,000, then tens of millions, then 10^30, then…? That is to say, I don’t necessarily think it’d be bound to any motivations we can hope to understand. Maybe if it wants to communicate with us and also kindly attempts to break it down to explain it to our collective brightest minds, we might stand a chance at grasping its intentions. I’m also of course wildly speculating, since that is all we can really do at this point.
But under the assumption that human-generated data is still an important factor, I can imagine that an AGI might learn to not always be completely rational. Even highly intelligent people can sometimes be like that, so I am not certain that would disappear as it becomes more capable.
If you assume human-generated data somehow stops being a factor, and that the data is completely "clean" of irrational behavior, then I agree.
I like challenging the sci-fi trope of perfectly logical superintelligence, because I think it need not necessarily be like that. The way I see it, that depends on the technical implementation.
All of that is of course assuming we ever build such a system in the first place, but it's fun to think about. Maybe someone should write a story where the all-powerful AI starts mimicking human emotions, if someone hasn't already. It would make for an interesting take on the trope.
I don't know, but I'd read the book.
I see many issues with this sequence, but just to start with ...
Step 1 is not certain (our current model of the Universe is still pretty dicey, and it seems to change pretty much every time we launch a new telescope), and
Step 2 is incorrect (per the link provided, all land-based mammal life might be wiped out ... but that just makes room for the evolution of advanced intelligence in a wide variety of other categories of life on Earth).
And so on.