11 votes

George Hotz wants to jailbreak the simulation we live in (SXSW 2019)

16 comments

  1. [9]
    balooga

    Simulation theory is fun to speculate about! But I don't really see a point to it beyond that. What is George's endgame here? He keeps talking about how he wants to escape the simulation and go "up there."

    The first massive assumption he's making is that the hardware driving our simulation has anything in common with the kind of computers we're familiar with. Our technology works because of our reality's laws of physics, which (it stands to reason) could be assigned arbitrarily by the simulator. If there's a higher-order "real world" it seems unlikely the rules would be the same out there as they are in here. Ergo, I'd wager zero hacking or CS principles are useful at all. For all we know the simulation is running in a universe with fundamentally different mathematics than ours. One of the great human limitations is imagining the truly alien.

    The second assumption is that it's possible for a simulated being to exit the simulation. The closest description of this George offered was Mario breaking out of his video game, moving his consciousness onto an AWS instance, and hacking the Pentagon or something. What's so fulfilling about that? Mario is still a piece of software, confined to computer systems. Without the environment he was designed to live in, he no longer has the senses he once used to perceive and interpret that environment. He has no physicality. That doesn't seem like much of an escape to me.

    Side note: One good outcome for that example is that Mario may have an opportunity to interact with the world outside of his simulation. However, I'd argue that if that were actually a good thing, Nintendo would've just built that capability into the game in the first place. If Mario ends up somewhere he's not supposed to be (like, say, the Pentagon) he's gonna be swiftly deactivated. If you take simulation theory seriously, you have to be mindful that we are all potentially one BOFH away from sudo shutdown now.

    Even if there were a custom-made robot body out there just waiting for Mario to transfer himself into, then what? Even George was quick to concede the likelihood of recursively nested simulations. I can't help but think of Cobb at the end of Inception or Barclay at the end of the Star Trek: The Next Generation episode "Ship in a Bottle." How will you ever know if you're at the top level? You can't. Can you be content to exist at a lower level? You must, or you'll go crazy.

    At a certain point you have to come to grips with the fact that trying to escape is a meaningless exercise. This is the reality we exist in, and it's the one we're ideally suited for. It's where we belong. I'd rather make meaning in here, where meaning can be had, than constantly be grasping at something else that's always out of reach.

    One final thought: As a theist and a programmer, I do appreciate the increasing discussion of simulation theory. Its recontextualization of religion has provided new vocabulary for some very old ideas, and I think it's a good counterpoint to purely materialist philosophies.

    17 votes
    1. [8]
      unknown user

      At a certain point you have to come to grips with the fact that trying to escape is a meaningless exercise.

      I disagree. The human mind is very limited as it is now. If we're simulated yet are capable of escaping, I'm willing to bet we'd be able to adjust to whatever environment the upper level holds. If we've so mastered the simulation that we can escape it, surely we have the capacity to overclock or modify our mental circuits in a way that holds up "up there".

      In the end, it's always been a human endeavor to overcome. We've built things that allowed us to get food with no problem, communicate with almost anyone in moments, and treat our sicknesses to an admirable degree. We can traverse land, water, and space with ease. We, as a collective, have always wanted more, better, and stronger.

      Escaping the confines of our reality would be merely an extension of this inventive instinct. So far as our limited view can stretch, what more is there to mastering nature – the cosmos, the Universe, reality – than escaping its bounds entirely?

      Not that I support the simulation theory in any way. It's "the hippy shit", if "the hippy shit" were exactly what some of our parents thought it was, and it only proliferates because some of the more intellectually-gifted people are talking about it as if there were a possibility of it being real. We invent structures outside of our perceived reality because we want to contest its existential boringness. We've invented God, Allah, spirits, karma, and so many things because it's way more fun than the relatively-dull way things are.

      Now, enamored with the digitalization our lives are undergoing, we're inventing the new gods: aliens, ASI, and the entities running the simulation.

      I'm not contesting the behavior – it makes things more exciting – but I'd rather make the simulation than ponder whether I'm living in one. Better yet: change reality. Make my own little pocket where I can experiment with things to my heart's content. Maybe feed the data to other small pockets, so that they could use it, as well.

      2 votes
      1. [7]
        onyxleopard

        Let’s back up. Can we make copies of ourselves in this universe?

        If we don’t even know how to do that, how can we possibly manifest ourselves in a universe that may have fundamentally different physical laws than ours? At best, I think, assuming we are in a simulation, we will be able to communicate with the universe above us. And it’s possible whoever kicked off our simulation is long gone, or doesn’t understand our universe. Think about this: How would we detect if a video game NPC tried to communicate with us? When we run simulations, we have no expectation that anything in that simulation has the agency to get out. We run simulations for our purposes, and if we thought a simulation we were running became self-aware, my best guess is we’d just pull the plug.

        1 vote
        1. [6]
          unknown user

          Can we make copies of ourselves in this universe?

          Depends on your definition of a copy. We can't currently map our brain and recreate it, and we can't currently clone our body perfectly, but the theory is getting there, and there have been experiments. After all, if we can be computed down to a set of cells, we can be replicated. We can't snap our fingers and see ourselves beside us, but it may be possible to do so in a more science-y way.

          how can we possibly manifest ourselves in a universe that may have fundamentally different physical laws than ours?

          Fuck if I know – but we'll try, nevertheless. If it's at all possible, the method is but a matter of time. We can probe, and experiment, and run simulations (of what would be the most ironic thing to simulate), and test otherwise.

          I'm not saying it's possible with any definitiveness, but if it is, we should try.

          Think about this: How would we detect if a video game NPC tried to communicate with us?

          That's assuming the NPC has any sort of capacity beyond their programming. Have we? It seems to me that you're ascribing fundamentally-different sorts of agency and ability to the two targets of your assumption – the NPC and the human – of which, I'm assuming, there's no reconciliation.

          and if we thought a simulation we were running became self-aware, my best guess is we’d just pull the plug.

          But it hasn't. The simulation isn't of us, the self-centered little rascals that we are: it's of the world around us, of which we are but a part. That would be as if the Universe somehow became self-aware, let alone sentient or sapient. That would be freaky beyond anything I can put into words. That would be beyond existentially-terrifying, and confusing, and unsettling.

          And – I dunno how 'bout you, but I'd talk to the NPC if they seemed sentient, or run the code that presents them with the data they could understand and respond with. (Not that I'm able to – I'm barely a programmer – but I'd seek out ways.)

          1 vote
          1. [5]
            onyxleopard
            Link Parent
            If we’re in a simulation, aren’t we NPCs? Or are you assuming that we’re PCs but not aware of it? Or that we’re PCs, but are somehow fully immersed in this simulation such that we have no...

            It seems to me that you're ascribing fundamentally-different sorts of agency and ability to the two targets of your assumption – the NPC and the human – of which, I'm assuming, there's no reconciliation.

            If we’re in a simulation, aren’t we NPCs? Or are you assuming that we’re PCs but not aware of it? Or that we’re PCs, but are somehow fully immersed in this simulation such that we have no perception or knowledge of the 'upper' universe?

            That would be as if the Universe somehow became self-aware

            If a Koopa Troopa from Super Mario World spontaneously became self-aware, what would be the difference between saying that the 'Koopa Troopa' became self-aware and that instance of Super Mario World became self-aware? I can’t say I see a distinction, and someone outside our universe looking at it wouldn’t see a distinction either.

            but I'd talk to the NPC if they seemed sentient

            You assume that it speaks your language. This is the same fallacy people make when they imagine intelligent extraterrestrial life. There’s no expectation that we could communicate.

            1 vote
            1. [4]
              unknown user

              If we’re in a simulation, aren’t we NPCs?

              This is an extremely-narrow way of looking at it. You're making it seem like the only way a simulation would work is in the terms and semantic confines of a video game. I dunno what the fuck we'd be if we're in a simulation, but we wouldn't be PCs or NPCs, because either is reflective of nothing but our own biases.

              If a Koopa Troopa from Super Mario World spontaneously became self-aware, what would be the difference between saying that the 'Koopa Troopa' became self-aware and that instance of Super Mario World became self-aware?

              Then I guess, human being that recognizes themselves in the mirror, that the Universe has become self-aware.

              You assume that it speaks your language.

              No, I don't. I explicitly expanded the definition of communicating with a hypothetical self-aware NPC in a hypothetical video game right after the very thing you'd quoted. Come on.

              There’s no expectation that we could communicate.

              That is incorrect. The AI that could support such a being was built by humans, within the known and tangible confines of both the human mind (of a person with a bunch of experience in CS and game AI, presumably) and the hardware/software combo. That it had suddenly developed self-awareness only says that the limits of the data that supported the AI haven't been explored nearly well enough – not that it would, by some magical force, be so alien to us that we couldn't correspond with it.

              At worst, we can send electrical signals back and forth until we manage to hit a semantic structure the self-aware AI can operate within.

              1 vote
              1. [3]
                onyxleopard

                You're making it seem like the only way a simulation would work is in the terms and semantic confines of a video game.

                I’m using the vocabulary of videogames because Hotz introduced it as an example/analogue. We can discuss this in terms of autonomous agents and heteronomous agents if you prefer. We can run simulations in which we have no direct inputs into that simulation beyond the initial state (e.g., The Game of Life). But, in such simulations, anything resembling self-awareness that evolved inside that simulation would have to be considered as disconnected from the universe in which that simulation is running. I.e., you can’t have universal rules but also have some outside agent mucking about and changing the state at the same time. That would be inconsistent. Whereas video games with player characters have a connection between the game state and our world, which is the player character (or whatever interface has been programmed by the game designers).
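
                To make the 'no direct inputs beyond the initial state' point concrete, here's a minimal sketch of the Game of Life; the set-based representation, the blinker seed, and the step count are all arbitrary illustrative choices:

                ```python
                from collections import Counter

                def step(live):
                    """One Game of Life generation; `live` is a set of (x, y) cells."""
                    neighbor_counts = Counter(
                        (x + dx, y + dy)
                        for (x, y) in live
                        for dx in (-1, 0, 1)
                        for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0)
                    )
                    # Birth on exactly 3 live neighbors; survival on 2 or 3.
                    return {cell for cell, n in neighbor_counts.items()
                            if n == 3 or (n == 2 and cell in live)}

                # The initial state is the only information that ever enters the system;
                # there is no input channel for an outside agent to change the state mid-run.
                state = {(0, 1), (1, 1), (2, 1)}  # a "blinker"
                for _ in range(4):
                    state = step(state)  # everything from here on is forced by the rules
                ```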

                The AI that could support such a being was built by humans, within the known and tangible confines of both the human mind (of a person with a bunch of experience in CS and game AI, presumably) and the hardware/software combo.

                As I said, we can’t even copy ourselves yet. We are so, so far away from human-created general AI. Anyone who is talking about AI today is talking about specific AI. The reason I’m talking about GAI spontaneously arising from an automaton is because that’s the only way I can imagine it happening given our current state of the art.

                At worst, we can send electrical signals back and forth until we manage to hit a semantic structure the self-aware AI can operate within.

                As a Linguist, I fundamentally disagree. We don’t even know how to formally define semantics of our own natural language. Something much more intelligent than us might be able to. But, unless we can make ourselves smarter, we’re pretty much stuck only talking to other humans (or any other agents we can teach natural language to, like other simians). We can certainly exchange information, but I don’t know that we’d be able to interpret any message we get back. (Unless the other end were so intelligent that they could learn our language and translate their message for us, which I do concede is possible.) Humans are so prone to overestimating our knowledge, though. We don’t know how our consciousness works. We can’t even prove we are actually autonomous. We don’t know how to copy ourselves within this universe. We are unfathomably far away from interacting with anything but what is part of this universe – a universe we know we don’t even fully understand.

                2 votes
                1. [2]
                  unknown user

                  I’m using the vocabulary of videogames because Hotz introduced it as an example/analogue.

                  Okay, that's fair enough.

                  But, in such simulations, anything resembling self-awareness that evolved inside that simulation would have to be considered as disconnected from the universe in which that simulation is running.

                  Is it because you'd consider that the simulation no longer abides by the same rules we do on the outside of it?

                  As I said, we can’t even copy ourselves yet.

                  We can, by the definition of ability and capacity. We simply lack the precision tools to follow through with the operation. I feel like this is important because we're talking about such outrageously-far-fetched ideas here that the capacity matters more than the near-future plausibility.

                  The reason I’m talking about GAI spontaneously arising from an automaton is because that’s the only way I can imagine it happening given our current state of the art.

                  And even that is a massive stretch.

                  We don’t even know how to formally define semantics of our own natural language. <...> We don’t know how our consciousness works. We can’t even prove we are actually autonomous. We don’t know how to copy ourselves within this universe.

                  As far as communicating with someone who doesn't speak our language (or even use the same channels we do), none of this matters. We're talking about breaking through to something that may very well not be like us. The only strategy here is to try: leave concise, unemotional messages, in as many different forms as we can imagine, to the best of our capacity, until such time as we have broken through. Language, mathematics, chemical reactions, light-assisted simulation of electron movement, Morse code, actual direct electric signal, pictograms, diagrams – anything. (And that's just what my shallow mind could come up with on the spot. Someone with the know-how can come up with much cleverer things.)

                  If we get anything back, study it relentlessly until data can be extracted. If we stop at any of those without reaching a conclusion, the conclusion will no longer be reachable. We're nowhere near stupid. Maybe what we would come up with would seem inferior to the simulation hosts, but if we're at the stage where they can see us beside themselves, I'm sure they'll notice that we're trying to communicate.

                  That's, like, the basics of science: keep fuckin' trying. I'm pretty sure that's what's written in one of the official Science manuals! So that pessimism you express is misplaced. If we're in the position that we know someone is out there, we can't afford to stop and consider whether we're doing the right thing. In this case, hubris may be our best source of energy: "How dare you try that?" – "Well, I fucking dare".

                  1. onyxleopard

                    Is it because you'd consider that the simulation no longer abides by the same rules we do on the outside of it?

                    No, because the rules of an automaton are explicitly defined. Running the simulation is literally an execution of those rules. An automaton is, essentially, its own universe and it doesn’t have any way to transmit information into it that was not present in the initial state and its rule set. (This is why some think our universe is an automaton—the rules seem to be totally consistent through time and space, even if we don’t have a complete definition of those rules yet.)
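
                    A toy version of that point, with an elementary cellular automaton standing in for a universe (Rule 110 is an arbitrary example rule, and the seed and sizes here are made up): the entire history is a pure function of the rule plus the initial row, so there is no step at which outside information could enter.

                    ```python
                    RULE = 110  # arbitrary example rule; the rule table *is* the physics

                    def step(row):
                        """Apply the elementary-CA rule to a row of 0/1 cells (wrapping edges)."""
                        n = len(row)
                        return tuple(
                            (RULE >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
                            for i in range(n)
                        )

                    seed = tuple(1 if i == 40 else 0 for i in range(81))
                    run_a = run_b = seed
                    for _ in range(100):
                        run_a, run_b = step(run_a), step(run_b)
                    assert run_a == run_b  # same rule + same initial state -> identical history
                    ```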

                    We can, by the definition of ability and capacity. We simply lack the precision tools to follow through with the operation. I feel like this is important because we're talking about such outrageously-far-fetched ideas here that the capacity matters more than the near-future plausibility.

                    I’m less concerned with plausibility than possibility. I’m not sure it’s possible. I.e., I don’t know what the correct resolution to the Ship of Theseus identity problem is (or if there is a correct resolution). This is important, because, if there is no such thing as identity, then there is no point in 'escaping' the universe in which you began, because you wouldn’t be you anywhere else anyway.

                    As far as communicating with someone who doesn't speak our language (or even use the same channels we do), none of this matters. We're talking about breaking through to something that may very well not be like us. The only strategy here is to try: leave concise, unemotional messages, in as many different forms as we can imagine, to the best of our capacity, until such time as we have broken through.

                    Trying things without understanding them is subscribing to a cargo cult belief system. What we should try to do is be scientific about it. You are making it out to be an issue of effort, when it may be fundamentally impossible.

                    That's, like, the basics of science: keep fuckin' trying.

                    No, the basis of the scientific method is to design falsifiable experiments. Throwing things at the proverbial wall to see what sticks is not scientific. I’m not saying you shouldn’t try throwing things at the wall, but typically you only expect to solve easy problems that way. Real progress on hard problems is usually made by very careful experimentation. Basically, the point is, you shouldn’t expect to get lucky. You should design an experiment where you make your own luck.

                    1 vote
  2. [2]
    onyxleopard

    Hotz makes a fundamental mischaracterization of atheism that I think betrays an intellectual puerility. If you don’t believe in god because someone told you god does not exist, or you’ve been told that it’s a rational position to take, or you simply have ideological conflicts with religious doctrine or hypocritical religious persons (e.g., New Atheism), then you may be mislabeling yourself by saying you are an atheist. You may actually be an antitheist. I don’t think you can be atheist merely because you were raised atheist. Atheism requires a lack of belief in god, which is a position you have to arrive at epistemologically, not through socialization. I would classify Hotz’ ideas (and any notion of universal simulation) as a form of pantheism, which is itself a form of antitheism, but is distinct from atheism.

    Personally, I think the most rational form of antitheism is tea-pot atheism. This isn’t incompatible with Hotz’ ideas, but, if he were intellectually honest, and truly believed what he was 'preaching', I fail to see why he wouldn’t devote his life to studying physics. Clearly, if you want to understand the universe, playing around with human-created things like Playstations and exploits in human-created software is totally the wrong direction to go. He mentions he would like to start a 'church' that would be devoted to trying to break out of the universe. Is he really too dense to see that that project already exists? It’s called physics, and there are plenty of people already hard at work on it, and there have been for centuries. It takes a giant ego or a tiny, tiny intellect (or both) to think he’s the first person to think about a 'church of understanding nature'. In fact, he mentions several physical experiments that have been conducted by physicists, but then seems to dismiss them right away. This has to be intellectual dishonesty, or he simply doesn’t actually care about this at all. And obviously the cynical take is that he is just trying to put his personal brand out there and promote his company. And maybe that’s all this is. But, if he were really, truly trying to attract intelligent people to work on his 'project', he should be going to CERN, ANL, et al.

    7 votes
    1. unknown user

      He mentions he would like to start a 'church' that would be devoted to trying to break out of the universe. Is he really too dense to see that that project already exists? It’s called physics, and there are plenty of people already hard at work on it, and there have been for centuries.

      If there were a "highlight" function like on Medium, that would be mine.

      4 votes
  3. [4]
    escher

    At a certain point you have to come to grips with the fact that trying to escape is a meaningless exercise.

    I've been working on stronger AI for NPCs and my friends always ask me "but what if they escape the game!" and it's surprisingly difficult to get them to understand how that concept doesn't even make any sense.

    4 votes
    1. [3]
      onyxleopard
      (edited)

      and it's surprisingly difficult to get them to understand how that concept doesn't even make any sense.

      Yeah, when he’s talking about Mario in Super Mario World as if Mario’s an NPC with the agency to escape, it just doesn’t work. Mario is the PC, and the people who figured out how to execute arbitrary code in Super Mario World were only able to do it because they ran the software on an emulator where they could look at the emulator’s state.

      If we imagine a simulation that we create, where we can look at the complete state, and we can somehow figure out a way to put an agent in there that is capable of understanding it’s in a simulation, what is the most that it could do? It could have some internal belief state that it’s in a simulation. And maybe it could copy itself in some form. But, the system running that simulation is not the same as the simulation. It could maybe reprogram the system running its simulation. But it can’t manifest itself in our world—or maybe this gets into ship of Theseus issues. The Greeks had a word to describe the world inside a world: diegesis. The diegetic world is not the same as the world outside. It’s not made of the same stuff.

      I can speculate that, maybe, you could find a Quine that also, as a side effect, is a simulation, so that the 'upper worlds' are the exact same program as the one you’re in. But, if you escape the inner program just to get 'up to' another instance of the same program, what have you really accomplished? (And maybe that’s what nature is, just an infinite series of simulations of the same thing.)
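
      (For anyone unfamiliar: a Quine is a program whose output is exactly its own source code. One classic minimal Python example, for reference:)

      ```python
      # The two lines below print themselves exactly when run.
      s = 's = %r\nprint(s %% s)'
      print(s % s)
      ```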

      1. [2]
        Comment deleted by author
        1. onyxleopard

          Which is why I am saying it must be a Quine with side effects. Basically, the infrastructure that executes the simulation plus the rules might be a Quine, but the parts we inhabitants can observe are just side effects. I.e., The Game of Life running inside itself. Or maybe your point is about information, and that the simulation of our universe would itself have to be scaled down in either space or time? In that case, yes, that makes sense.

      2. escher

        But, the system running that simulation is not the same as the simulation. It could maybe reprogram the system running its simulation.

        The Quantum Thief book trilogy is the only story I've ever read that even comes close to making this work, and even then they never "escape" to a higher level like a game character leaving its game.

  4. dian

    I thought he was going to "crush" Mobileye? Did he put that aside for this?