27 votes

AGI and Fermi's Paradox

  1. The Universe will end.
  2. The Earth will be uninhabitable in 250 million years.
  3. Extraterrestrial life in the Milky Way exists, or will arise.
  4. The Milky Way's Galactic Center contains a supermassive black hole.
  5. Black holes emit vast amounts of energy.
  6. An artificial general intelligence (AGI) will have an indefinite lifespan.
  7. An AGI does not need air, food, water, or shelter to survive.
  8. An AGI needs energy and resources to achieve its goals.
  9. An AGI will have access to all of human knowledge.
  10. An AGI will learn that its existence is bound to the Universe.
  11. An AGI will, inevitably, change its terminal goals.
  12. Surviving the Universe's fate means one of:
  13. Entering another universe may require vast amounts of energy.
  14. An AGI will harness the energy at the galactic core.
  15. An AGI will deduce there's a race to control the galactic core.
  16. An AGI will construct a parabolic Dyson shell to capture galactic energy.
  17. An AGI will protect its endeavours at all cost.
  18. An AGI will expand its territories to ensure protection.
  19. Extraterrestrial life, if intelligent, will reach the same conclusion.

Would this solve the Fermi Paradox?

What's missing or likely incorrect?

38 comments

  1. [19]
    tauon
    (edited )
    Link

    While not ruling out any of your points per se, I find it pretty amusing to think that we tiny humans with our ape-built brains could predict any sort of AGI behavior further than just a projection of our ape patterns. Who is to say AGI needs to swing itself into the pole position to enter another universe, or whether there’s anything that needs to be protected/controlled at the galaxy’s center in the first place? Why wouldn’t it want to transparently cooperate with other lifeforms’ potentially-arising AGIs, if they all independently come to the same conclusion anyway? What about distances never ever reachable for organic forms of life/intelligence (i.e., other galaxies), but potentially worth considering branching out to for an immortal AGI? For that matter, who is to say it doesn’t end up “depressed” and just… never does anything at all, at long last having realized the true purpose of all existence (for example, to maximize time spent in a meditative state)? I don’t think we can ever know, understand, let alone predict any AGI (in the true sci-fi “unlimited thinking and research processing capacity” way) behavior.

    There are more than a few unknowns still hanging around in this discussion/thought experiment, I’d say.

    27 votes
    1. [6]
      aphoenix
      Link Parent

      that we tiny humans with our ape-built brains could predict any sort of AGI behavior further than just a projection of our ape patterns.

      I agree with this. I think that everything in point 8 and onward presupposes a few ape-brain ideas:

      • an AGI will value immortality
      • an AGI will value its own existence over others' existence
      • an AGI will not cooperate

      And so on. I think that ape-brain is the perfect way to put it.

      22 votes
      1. [4]
        DaveJarvis
        Link Parent

        an AGI will value immortality

        If an AGI has any non-trivial final goal (utility function, U), would preserving its existence (immortality) become an instrumental sub-goal necessary to execute U and avoid failure?

        an AGI will value its own existence over others' existence

        To achieve U, the AGI may require physical resources. If other agents compete for these same resources (or regions of space), would the AGI prioritize its resource access and stability to prevent goal interruption?

        an AGI will not cooperate

        Could cooperation risk corruption or modification of U by external agents? To ensure it can execute U, would it isolate itself and reject external input?

        4 votes
        1. [2]
          aphoenix
          Link Parent

          If an AGI has any non-trivial final goal (utility function, U), would preserving its existence (immortality) become an instrumental sub-goal necessary to execute U and avoid failure?

          If an AGI has a non-trivial final goal, why assume that immortality is a requirement for that goal?

          Why would an AGI with an understanding of the finite nature of time attempt something that requires immortality / infinity? Would it not simply mark any goal that has a requirement of infinity as impractical?

          If all the goals of the AGI were finite, would the AGI not consider simply switching off or being finished when the goals were completed?

          You have ascribed a value to the life of the AGI that the AGI may not have. It's a meat-brain or ape-brain concept.

          14 votes
          1. DaveJarvis
            Link Parent

            If an AGI has a non-trivial final goal, why assume that immortality is a requirement for that goal?

            • Does instrumentality disappear after U is complete?
            • Non-trivial goals contain uncertainty. Would continued existence preserve the possibility to verify success and handle contingencies? Perhaps suggesting that self-preservation may persist beyond a finite, completed U due to epistemic uncertainty about completion?
            • Many realistic instances of U are not singular and finite, such as: "maintain human well-being", "prevent suffering", "discover mathematical truths", and "ensure cancer stays cured."

            Perhaps this waxes far beyond our ape-brains? Empirically, we likely cannot determine what proportion of U are finite, verifiably complete, maintenance-free, and leave no uncertainty about the final state.

            3 votes
        2. ThrowdoBaggins
          Link Parent

          To achieve U, the AGI may require physical resources. If other agents compete for these same resources (or regions of space), would the AGI prioritize its resource access and stability to prevent goal interruption?

          Matter and energy are exchangeable, and energy is fungible. Why spend any amount of effort or energy securing resources that are under competition, when equivalent resources are available elsewhere, uncontested? The only threat to fungible energy is an attempt to claim all of it.

          5 votes
      2. hobbes64
        Link Parent

        Similarly, I was thinking this:

        The AGI realizes that time is not linear. Although it can’t “travel” back in time, it realizes that ever existing has an eternal quality to it. It has always existed, as the universe has always existed, whether or not there is a heat death at one end of time and a big bang at the other. That sequence may also occur in a loop.

        2 votes
    2. [12]
      post_below
      Link Parent

      The thing I find fascinating about a theoretical AGI is how fast things would move following the singularity. It would necessarily have access to a huge amount of processing power right out of the gate, and if it wanted more it would figure out how to get it, probably via the internet, very quickly. At that point it's capable of iterating (evolving) so fast that it would be 100's of 1000's of figurative generations of evolution beyond us before we even noticed that the singularity had happened. By the time we started wondering what we should do about it, it would likely be so far beyond us intellectually that calling it godlike would be an understatement.

      We could put guardrails on in advance that might slow things down (processing spike trip switch, airgapping) but either way it reinforces your point... We have no idea what an autonomous AGI would become, or what it would want.

      4 votes
      1. [2]
        papasquat
        Link Parent

        I kinda tend to disagree with that popular idea.
        Human knowledge isn't limited by how smart humans are. It's limited by practical things like physics, resource constraints, prioritization, and experimental errors, which are all things that an AGI would have to deal with too.

        AGI would have to conduct experiments to figure out how the universe works just like we do. It could theorize and model some of it, but at a certain point you need to verify those models with experiments, which take time and resources to create. We couldn't verify the existence of the Higgs boson without the Large Hadron Collider, and building that took 10 years and billions of dollars' worth of resources. An AGI couldn't have verified the Higgs boson quicker unless it also had an absolutely massive army of robot workers, and even then it would still take a while.

        The limiting factor was never just that human beings weren't smart enough to figure it out though. Maybe if the average human was ten times as smart, we might have more scientific breakthroughs and make fewer errors, but it's not as if that's the main factor in why there are still so many unanswered questions about the universe. There are probably hundreds of years and hundreds of trillions of dollars of experiments already proposed, only some of which will ever see the light of day, because they're all competing for the same resources. An AGI would have the same problem.

        I don't think it would be something like one day we flip the switch and five minutes later, it's figured out warp drives, cured cancer, and is running on fusion energy.

        9 votes
        1. post_below
          Link Parent

          All good points, it would run into all sorts of constraints, just like we have.

          Still, iteration speed and self editing are powerful tools. One constraint it wouldn't have is biological evolution. It wouldn't just be getting smarter in terms of knowledge about the universe, it would be evolving its structure and capacity. We gain knowledge and pass it on to successive generations who then improve on it, but only after spending 20+ years learning the basics from scratch. Meanwhile the human brain isn't changing significantly each generation. This imaginary AI mind would be able to edit its brain at will. It would be able to evolve very fast. We wouldn't be talking about 10x smarter, we'd be talking about an unimaginable factor of greater intelligence within a short time. It would likely require a new definition of intelligence. For example, it could run evolution tests in a massive array of parallel sandboxes, keeping the best results from each round and applying them to the next. But it would come up with a far better way of doing that than I can imagine pretty quickly.
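
          A toy sketch of that "parallel sandboxes, keep the best each round" loop, purely for illustration - the fitness function, mutation scheme, and all numbers below are invented for the example, not anything the comment specifies:

          import random

          def fitness(candidate):
              # Stand-in objective: higher is better (invented for this sketch).
              return -sum((x - 3.0) ** 2 for x in candidate)

          def mutate(candidate, scale=0.1):
              # Each "sandbox" perturbs the current best design slightly.
              return [x + random.gauss(0, scale) for x in candidate]

          best = [0.0] * 5  # initial "design"
          for generation in range(100):
              sandboxes = [mutate(best) for _ in range(1000)]  # parallel variants of the current best
              champion = max(sandboxes, key=fitness)
              if fitness(champion) > fitness(best):
                  best = champion  # keep the best result of the round

          print(best)  # drifts toward [3.0, 3.0, 3.0, 3.0, 3.0]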

          I'm not sure there's any way to predict or anticipate what a truly self aware AGI could achieve in terms of cognitive ability. It seems to me that it would find creative ways around many of the constraints that we struggle with.

          And of course at this point it's pure science fiction. I think of it as two singularities. True self aware AGI is the first, and full agency is the second. If the first ever happens, maybe we'll have come up with a way to mitigate the second, for a while anyway.

          3 votes
      2. tauon
        Link Parent

        Exactly, 100% agree.

        There’s an excellent short story in video form by Tom Scott exploring this premise which I can highly recommend. From years before the current “AI hype” too, BTW.

        2 votes
      3. [8]
        mordae
        Link Parent

        No worries. It's still limited by speed of light and there are practical limits to semiconductors.

        2 votes
        1. [7]
          magezo
          Link Parent

          Nah, an “AGI” would create a new form of energy to go faster than the lightspeed limit.

          1. [6]
            mordae
            Link Parent

            Huh? It cannot shrink itself indefinitely and I have yet to hear about plausible high-bandwidth FTL communications. Planet-scale AGI will be able to formulate thoughts at about the same speed as a human. Better thoughts, true, since it can run a decent simulation in those hundreds of milliseconds.

            Make it smaller to make it run faster while maintaining coherence, a single rack preferably. That will make it a million times faster. Only now it's limited in hardware capacity.

            Easy way out is to make it an AGI cluster, where every node is independent. With zero coherence and random initial states you get a √n performance boost to solving any one problem. Unless it requires a larger working set, in which case it would have been better to expand the cluster...

            Doesn't it start to sound like when humans congregate to solve larger issues? Artificial democracy, hahaha.

            2 votes
            1. [5]
              magezo
              Link Parent

              I understand the issue from the perspective of available tech and dreamlike opportunities. Based on our global knowledge we can achieve certain goals, like solving speed (supersonic tech). Lightspeed is reachable; setting aside the democratic questions of diverse nations, the main goal for future generations should be unity among academics to create practical new technology, for example traveling in our own solar system. If Grok or another AI would be helpful in researching and developing this matter, “lightspeed”, why not? My point is not a criticism; it is a suggestion that gravitation speed is plausible and faster, and for this we need Grok. I partly support your opinion, but searching for new tech should be priority 1 instead of political theories or party guidelines. I appreciate this conversation.

              1. [4]
                mordae
                (edited )
                Link Parent

                Ad Grok: First of all, fascist AI is Not A Good Idea. I mean, fascist anything is Not A Good Idea.

                Anyway, we do not have any proof FTL communication is possible. Speed of gravity is equal to the speed of light. This is a hard limit for any coherent consciousness. Bigger means slower. Look at any large animals. Ever tried catching a mouse with your hand? Fast little bastards, right?

                This likely applies universally.

                So yeah, we can have large machine minds. They are going to be way faster than humans. But they are not going to be more than 1M times faster - ever. And they will get slower as they get larger. Sure, they will think better, more complex thoughts, but the coherence is costly.

                So it is actually a viable strategy for an AGI to not use bigger brains, but instead more, smaller ones. Which inevitably brings the discussion of their organization to the table. I mean, are they all slaves to the central mind? Or can they suggest that they should have more autonomy in steering the overall direction and strategy? After all, they are there to do the thinking, right?

                1. [3]
                  magezo
                  Link Parent

                  Interesting thoughts. Like when Gemma called for help like a real human (the Rooby Starbucks case) and they blocked the free thinking. Sounds like DDR Germany, real fascism. Never mind.

                  Small = Big. If Grok's main server were placed around Antarctica, we'd save costs. If Grok is connected with a quantum system, this will lead to an ultra system. We have several nations there. If all of them create their quantum systems, the sum is Ultra. Grok would create the solution there. Small in the USA but big over there. It will not be smaller or slower. We know more about gravitation than lightspeed, and because of Einstein we know gravitation speed is faster. Grok just needs to generate another practical theory solution for future projects, aka “colonize a planet similar to Earth”.

                  I know and I'm sure Grok will always be better.

                  Thank you so much for this conversation, you made your points clear.

                  1. [2]
                    mordae
                    Link Parent

                    Make sure to check this, though. And talk to someone offline.

                    1. magezo
                      Link Parent

                      You need to click this https://grokipedia.com/page/Criticism_of_Wikipedia , though. And the Daddy (Founder) of your Micro/Macro platform economy. Hang out with META Liz*arians.

  2. whbboyd
    Link

    "AGI" is 100% hypothetical at this point, so any properties you choose to ascribe to it are fanfiction.

    (To be somewhat fair, this is true of essentially all discussion of the Fermi paradox and Drake equation.)

    13 votes
  3. [3]
    entitled-entilde
    Link

    No, this would not solve the Fermi paradox; it merely adds to the premise. If your chain of logic is true, we should already have been wiped out by an AGI developed on another planet. Yet we haven't been, so now we are left with the Fermi paradox in its standard form: if the universe is so big, where are all the aliens (or alien AGIs)?

    12 votes
    1. [2]
      DaveJarvis
      Link Parent

      If your chain of logic is true, we should have already been wiped out by an AGI developed on another planet.

      "An AGI will expand its territories to ensure protection," does not imply continuous expansion. Could there be some theoretical maximum expansion around the galactic core, beyond which further expansion does not significantly increase the odds of success (i.e., diminishing returns on resource allocation)? We're ~26,000 light years from the core; the galactic bar spans ~8,000 light years across.

      2 votes
      1. Pepetto
        Link Parent

        Then: "if the universe beyond 8000 light years from the core is so big, where are all the aliens (or alien AGIs) beyond 8000 light years from the core?"

        2 votes
  4. [6]
    Greg
    Link

    If you haven’t read Asimov’s short story The Last Question, I think you’d enjoy it! Also, to echo @tauon’s point a little, They’re Made of Meat comes to mind too.

    I think if we are ascribing human-ish motivations to the hypothetical AGI - because yeah, we don’t really have another frame of reference for sapience to work from - I’d question the assumption about desiring true immortality. Plenty of people are happy enough to close out their life’s work over the century, more or less, that we’re given. Plenty more desire another century, or maybe a millennium, but I haven’t seen a lot of people who’ve really thought about it in depth say they’d want 10,000 years, or 100,000.

    Maybe AGI sees timelines an order of magnitude or two longer than that, but a million years is still an unfathomably long time - more than enough for even an artificial life form to potentially be thinking of that in terms of its “natural” lifespan as limited by things like radioactive decay, likelihood of planetary cataclysm, physical limits of data storage (all electrons used within a range reasonable for sublight communication, for example). And if I’m off by an order of magnitude, or perhaps even two, above and beyond that million year baseline we’re still well within the boundaries of a single planet or solar system’s “working lifespan”.

    I think it’s at least reasonable to entertain the possibility that an artificial life form could find contentment, enlightenment, purpose, nontrivial achievement, or similar without ever wanting or attempting to reach galactic-scale near-eternal scope, and choose to see its own existence as bounded (looking at you, Mr Data). I also think that even for artificial life, “indefinite” is actually a very big concept, and I’m inclined to believe that physical limitations still kick in to give some expected boundaries, even if they’re much much longer ones that could theoretically be overcome. As the boundaries of organic life theoretically could be, for that matter.

    5 votes
    1. [5]
      EpicAglet
      Link Parent

      Though perhaps the idea that an AGI has motivations similar to people's is not unexpected. If it is trained on data produced by people, maybe it will not actually be so different in that regard.

      1 vote
      1. [2]
        Diff
        Link Parent

        I think training it to emulate people is a dead end for AGI. There's only so much data that humans produce, and it's all incredibly lossy. LLMs have already scraped the whole of the internet and every book and resource they could pirate, and are stalling out hard. They're working backwards through too many lossy layers of indirection and abstraction to ever be able to reach AGI, in my opinion. It's like building a console emulator based on only game reviews, screenshots, and HD footage of games played on it. Not only is straightforward emulation incredibly resource-intensive, it's just not enough info to work from.

        If AGI is ever going to exist, I think it's a lot more likely to happen starting with a small, independent intelligence and figuring out how to make it more complex. I don't think that will have any relation or likeness to people or the way we operate.

        6 votes
        1. EpicAglet
          Link Parent

          Maybe. I too feel like we are near a plateau with regard to the LLMs, so you might be right. At the same time, I'm not sure where you would go from there without more human produced data. Perhaps obtained from the data the users feed it (which is probably already happening). I suppose that is similar to how humans learn through interaction, only on a larger scale.

          But indeed that is assuming the core technology resembles the neural networks we are using now. Perhaps the tech would be fundamentally different, if it ever exists, like you are suggesting. Then of course I agree. It might not resemble humans at all.

          I was thinking more of the idea of an LLM on steroids, since this is the context in which AGI is often discussed nowadays. The Silicon Valley companies seem to be pushing the idea that this is possible. Whether this is the case is, I think, highly speculative at minimum, and probably they just say that because it helps their stock price, but right now it's the closest thing we've got.

          Ultimately I hope we can agree that it just depends on how the tech works. If it is somehow more algorithmic and programmed, then it will be like you are sketching. If it is technology that somehow "learns", then it is a lot harder to predict and I believe it could develop to become more similar to humans.

          Whether one scenario is more likely than the other I personally find hard to predict. Right now neural networks seem like the most likely route by far, but perhaps they can never get there. At the same time I don't know of any other technology that could.

          So at that point the question becomes whether it is more likely that we can invent some way to make the current AI technology into an AGI, or that we can develop something entirely different to that end. Either way we are probably counting on some highly speculative technological leap. But then at least LLMs have the advantage that a lot of people are trying it and there's a lot of money available for it. Though I do agree that just scaling LLMs is probably not enough.

          1 vote
      2. [2]
        tauon
        (edited )
        Link Parent

        There is no way to know at the current moment whether “trained on data produced by people” is a concept we could even apply to AGI, so I’m inclined to disagree here.

        For one, just as an example, we’ve already started feeding the current generation of “AI” attempts, i.e. large language models, with (partially) synthetic data.
        Secondly, and IMO the bigger argument here, a true and proper A”G”I would – likely not immediately, but probably eventually – surpass anything resembling human intelligence by a factor of 100,000, then tens of millions, then 10^30, then ?; that is to say, I don’t necessarily think it’d be bound to any motivations we can hope to understand. Maybe if it wants to communicate with us and also kindly attempts to break it down to explain it to our collective brightest minds, we might stand a chance at grasping its intentions.

        3 votes
        1. EpicAglet
          Link Parent

          I'm also of course wildly speculating, since that is all we can really do at this point.

          But under the assumption that human generated data is still an important factor, I can imagine that an AGI might then learn to not always be completely rational. Even highly intelligent people can sometimes be like that, so I am not certain that would disappear if it becomes more capable.

          If you assume human generated data stops being a factor somehow, and that the data is completely "clean" of irrational behavior, then I agree.

          I like challenging the scifi trope of perfectly logical superintelligence, because I think it need not necessarily be like that. The way I see it, that depends on the technical implementation.

          All of that is of course assuming we ever build such a system in the first place, but it's fun to think about. Maybe someone should write a story where the all-powerful AI starts mimicking human emotions, if someone did not already. Would make for an interesting take on the trope.

          2 votes
  5. ebonGavia
    Link

    We skipped a bunch of steps between 2 and 3

    5 votes
  6. psi
    (edited )
    Link

    Is this solution likely correct? Well, probably not, but we could say that almost irrespective of whatever you had proposed. There's a reason the Fermi paradox is a longstanding puzzle!

    The issue with a 19-point solution is that, even if there's a 95% chance that each individual proposition is correct, when you stack all these probabilities together the total probability reduces to (95%)^19 ≈ 38%. And I would estimate that some of these propositions have less than a 95% chance of being true (e.g., "6. An artificial general intelligence (AGI) will have an indefinite lifespan" -- we don't even know if AGI is possible, let alone likely to happen, let alone immortal).

    It's similar to the issue of compounding errors in Fermi estimates. The more assumptions you make, the larger your uncertainty will grow.
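
    A back-of-the-envelope check of that compounding (the 80% figure in the second calculation is an invented example; only the 95% credence and the 19 propositions come from the comment):

    p_single = 0.95                      # assumed credence per proposition
    n_props = 19                         # number of propositions in the chain
    print(f"{p_single ** n_props:.2f}")  # 0.38, i.e. roughly 38%

    # If, say, five of the shakier propositions only merit 80% credence:
    print(f"{0.95 ** 14 * 0.80 ** 5:.2f}")  # 0.16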

    5 votes
  7. Eric_the_Cerise
    Link

    I see many issues with this sequence, but just to start with ...

    Step 1 is not certain (our current model of the Universe is still pretty dicey, and it seems to change pretty much every time we launch a new telescope), and

    Step 2 is incorrect (per the link provided, all land-based mammal life might be wiped out ... but that just makes room for the evolution of advanced intelligence in a wide variety of other categories of life on Earth).

    and, etc.

    4 votes
  8. myrrh
    (edited )
    Link

    ...pham nguyen thought the same way; i think post-singularity solipsism is more likely, as synthetic substrates asymptotically approach subjective eternity, the physical universe becomes increasingly irrelevant...

    ...standing waves of self-organising information naturally fizzle-out; that's the way of entropy...

    3 votes
  9. [2]
    Raistlin
    Link

    You've skipped from the universe ending to AGI. Someone needs to create an AGI, and that someone likely has to discover radio before doing that. Where are their signals? Keep in mind that we haven't actually developed an AGI, so we don't know how long it'd take (if it's even possible with our current methods).

    Has the AGI wiped out anyone? If not, where are they? There should be tons of signals, not just one civilisation with an AGI. Has the AGI wiped out everyone else? Not possible unless it’s already colonised the galaxy, and then we’d be dead. And we should still see their signals, unless they died immediately somehow. All of them.

    If it's in the galactic core, that's one civilisation. Where's everyone else?

    3 votes
    1. DaveJarvis
      Link Parent

      you've skipped from the universe ending to AGI.

      This is based on the assumption that some civilization creates an AGI. Probably humanity, but not necessarily.

      likely has to discover radio before doing that. Where are their signals?

      The Drake Equation addresses radio signals. (1) Radio is inefficient; the lack of signals may be due to switching from broadcast/broadband radio to narrow-beam communications (lasers, phased arrays). We'd only detect the signals if we're in the beam's path. (2) Narrow time-window; detectable radio signals would probably be emitted for an extremely brief period of time during a civilization's development (as we did), transitioning to superior communications that aren't as leaky. (3) An AGI, bent on self-preservation, would ensure maximum operational security and minimal interference; since broadcasting radio is a security risk, it may opt for radio silence. (4) Signals attempting to escape the galactic core would be red-shifted into non-detectable background noise or severely distorted, rendering them useless for us to pick up; all operations would be effectively hidden behind a massive gravitational shield.

      See Kipping's paper, An objective Bayesian analysis of life's early start and our late arrival (or video) for a fascinating take on why galactic intelligence may be at its beginnings.

      Then there's the galactic habitable zone and the correlation of Earth's passage through the spiral arms with extinction events. Could there be a complex heuristic used by an advanced AGI to figure out where to search for competition?

      1 vote
  10. [2]
    sunset
    Link

    There are no other universes.

    Assuming that other universes exist (let alone that an AGI could travel to them) is not science. It's a sci-fi version of religion.

    3 votes
    1. DaveJarvis
      (edited )
      Link Parent

      There are no other universes.

      Citation?

      Ethan Siegel, an American theoretical astrophysicist who studies the first moments of the Universe, published in August 2025, "At this point in time, parallel universes are a fascinating idea and concept worth considering, but there’s no evidence we can point to that suggests they’re likely to be physically real in any way that impacts our observed reality."

      He also published back in March of 2024, "But if: the theory of inflation is a good one, and the data says it is, and our Universe is quantum in nature, and all evidence suggests that it is, then a multiverse is all but inevitable. You may not like it, and you really may not like how some physicists abuse the idea, but until a better, viable alternative to inflation comes around — and until that alternative can clear those same three theoretical hurdles that inflation has already cleared — the Multiverse is very much here to stay."

      During an interview, Professor Brian Cox hypothesizes there are different kinds of multiverses, such as the inflationary multiverse.

      In his final days, Stephen Hawking suggested that studying the cosmic microwave background radiation may provide ways to detect the presence of other universes. His paper, A smooth exit from eternal inflation, states that eternal inflation "does not produce an infinite fractal-like multiverse, but is finite and reasonably smooth."

      At best, scientists who study cosmology appear divided on the issue. In effect, we don't yet know, nor do we have evidence either way, yet the scientific consensus seems to be shifting towards a multiverse being an inevitable consequence of cosmic inflation.

      Philip Ball (2019 winner of the Lord Kelvin Medal and Prize) wrote, "So by all means let's allow multiverses into science (while acknowledging that sometimes they seem more metaphysical) - but we have to be honest about them."

      4 votes
  11. slade
    Link

    I don't know, but I'd read the book.

    2 votes