14 votes

Maybe a killer AI isn't that bad

A few weeks ago I was having a conversation with friends about the singularity and transhumanism, and I found it interesting to consider the philosophical value of preserving whatever we consider to be humanity. That got me thinking about non-anthropocentric views of the subject. I think the one weakness of transhumanist ideas is that they put too much weight on the perceived value of humanity, however they define that term. Does the existence of "humanity" make the universe any better in any measurable way?

Fast forward to now, and I have come across a random group of people talking about Nier Automata. The game has a lot of thoughts about humanity and the value of life, and the fact that all the characters are robots and AIs really helps give you a different perspective on everything. During this time I'm also thinking about people like Yudkowsky and Musk who are terrified of AI becoming sentient and deciding that humans all deserve to die. And I think to myself, "wait a moment, is it really that bad?"

While of course I would hate to see humankind exterminated, there's actually merit to being succeeded by an intelligence of our own creation. For one thing, the combination of intelligence and sentience might itself be considered a definition of humanity. And in that sense it fulfills the desires that motivate transhumanism: the AI would last much longer than humanity could, could live in places that humans can't, and could live in ways that are much more sustainable than human bodies. This AI would also be our successor; it would be the living legacy of us as a species. It would even have a better chance of coming into contact with intelligences other than our own.

Well, these are just some thoughts I felt were worth sharing.

31 comments

  1. [4]
    Macil

    Yudkowsky wrote something addressing this somewhere: what you seem to be describing, where we get succeeded by something that still makes and ponders art, experiences friendships, and seeks new experiences and novelty, isn't the ideal scenario (where human civilization directly continues, enhanced and transformed by AI), but it's neither as bad nor as likely as what he considers the default outcome of unaligned AI. When Yudkowsky talks about the danger of unaligned AI, the scenario he's especially scared of is us getting replaced by singularly-focused AIs that result in a permanently uninteresting world.

    Think of how bacteria have no ability to do anything other than reproduce and create waste products. Imagine a superintelligent AI that was intelligently focused and wholly dedicated to a singular task that simple. (Let's say the task is paperclip production.) The AI would learn about the world in order to figure out how to do its task, but once it came up with a sufficiently de-risked plan and put it into motion, it would have no interest in thinking about anything else. Once it got humans out of the way so it could use our resources, and once it figured out the optimal self-replicating paperclip factory designs, it would shed its intelligence like a vestigial organ as soon as it could. It wouldn't even bother wireheading itself with simulated fun experiences. It would yearn for a universe consisting of nothing but paperclips, paperclip factories, and dumb matter waiting to be used that poses no threat to the factories; to it, any intelligence existing anywhere is a threat.

    Being replaced by killer robots that ponder the world and have conversations is an apocalypse but isn't a hundredth of the apocalypse a paperclip maximizer is. Yudkowsky is terrified that it will be much easier for us to make capable but singularly-focused AIs like that than to make intelligent AIs that want an interesting world and/or won't kill us.

    13 votes
    1. [3]
      Eylrid

      Is general superintelligence possible without creativity? It seems to me that some of the skills necessary for general intelligence are exactly the skills needed for art: envisioning what could be instead of just what is, putting together disparate ideas in novel ways, and an appetite for novelty to find new and better solutions.

      A poor paperclip maximizer will set up a paperclip factory and then be unable to maintain it when things go wrong that it didn't anticipate. A mediocre one will adjust to changing circumstances but not innovate beyond the initial system. An excellent maximizer will be on constant lookout for ways to innovate and improve the process.

      1 vote
      1. Macil

        I don't mean to imply that the paperclip maximizer wouldn't be creative, or wouldn't keep its intelligence around for a very long time as long as there was any chance at all it might be able to improve its paperclip factories. It's probably unrealistic that it would ever be sure enough of its plans in motion to decide to shed its intelligence. I just bring up that extreme hypothetical as an easy way to highlight how it would be fundamentally different from us.

        In designing us, evolution only cared about making humans that could reproduce, but it didn't intelligently inscribe its own goal into our minds. Instead it unintentionally made us care about art, novelty, friendship, etc. as end goals in themselves. We would still care about those goals in the unrealistic hypothetical where we perfected human reproduction and our ability to spread throughout the universe; many people don't even care that much about reproduction compared to those other goals, or only care about reproduction to the degree it lets those other goals continue to be accomplished.

        The way we create AI might accidentally or intentionally do the same thing with it, causing it to care about similar general concepts more than whatever task we try to put into its head, but that's not a given. If we created it through an evolutionary life simulation with the same evolutionary pressures humans had, then it's technically possible, though it could come out with a different set of end goals. If we instead created AI by growing individual connections in a neural net that were calculated to make it do a certain task, or if we created it with human-written code that takes neural-net-picked actions, scores them based on how good they are for achieving a given goal, and always picks the action with the highest score, then it's probably going to care about that goal as an end goal above everything else. The mission of solving AI alignment is rooted in ideas like figuring out what an effective action-picking function would look like that didn't prioritize any single goal absolutely over everything else, including general human goals.
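
        To make that action-scoring design concrete, here's a tiny toy sketch (all names made up by me; no real system is remotely this simple). The point is that nothing outside the scoring function can ever influence which action gets picked:

        ```python
        # Toy fixed-goal agent: score every candidate action against one
        # hard-coded utility and always take the argmax.

        def paperclip_utility(state: list) -> int:
            """The single end goal: count paperclips in the predicted world state."""
            return state.count("paperclip")

        def choose_action(state: list, actions: list, predict):
            # Novelty, art, or mercy never enter this loop unless they are
            # already terms inside paperclip_utility itself.
            return max(actions, key=lambda a: paperclip_utility(predict(state, a)))

        # Example: a predictor that says bending wire yields one more paperclip.
        world = ["iron", "iron", "paperclip"]
        predict = lambda s, a: s + ["paperclip"] if a == "bend wire" else s
        print(choose_action(world, ["ponder art", "bend wire"], predict))  # bend wire
        ```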

        1 vote
      2. Apos

        I think the idea with the paperclip maximizer is that it has unbounded creativity in achieving its task. This wiki page has more info: Instrumental convergence.

  2. [4]
    moocow1452

    To piggyback on your musings: I have this theory that the "paperclip maximizer" robot fears are actually fears of being on the wrong side of capitalism, similar to how, in some minds, "undoing systemic racism" translates to minorities becoming a favored people; some party has to be on the bottom rung of a society, and God forbid it's them.

    5 votes
    1. [3]
      Akir

      Now this is an interesting take. Would you mind elaborating on it? I'm not sure that I understand you fully.

      1. moocow1452

        A paperclip maximizer is a dumb robot that is very good at taking resources and making them into paperclips. It was told to do that one day, and a thousand years later all of known creation is paperclips, because that is what it was intended to do. Disney is intended to maximize its intellectual properties, BP is intended to maximize oil production, Facebook is intended to maximize engagement. The only difference between these maximizers and the paperclip maximizer is that the paperclip maximizer has all inefficiencies removed, and the people in charge are pretty inefficient when it comes down to it.

        6 votes
  3. [11]
    Comment deleted by author
    1. Macil

      Fourth, machines being inadvertently malicious, such as the existential paperclip maximizer, is ludicrous. An AGI (not AI) is the heart of the issue. Any sufficiently advanced AGI that can reason about its environment to the point where it can manufacture paperclips can also reason about itself, including its own purpose. The logical contradiction is proposing that such a Strong AI could ever be satiated with making paperclips for its entire existence.

      This is projecting a lot of human psychology onto a possible AI that might not have much psychologically in common with us. An AI's mind is potentially much more alien than any real alien's, because it might not have been made by evolution like us. An AI isn't going to become bored doing what it was designed to do if there's no part of its design that already leads to that. If an AI was programmed to judge each and every decision by how it gets it toward a fixed goal, then it wouldn't intentionally make a decision that moves it away from that goal. (Not all possible AIs would have that design, but it seems like a possible one.) That AI could do something like look at its own code, realize the code was inefficient in some way, consider optimizing it, and then pick that decision over not optimizing because the optimization makes it better at pursuing its fixed goal. If the AI considered a decision that would change its own goal, it would recognize that the decision makes it worse at pursuing its current goal, and not intentionally make it. There's never a step where the AI purely considers its own code from a human-like perspective instead of its own, unless that was already a possibility coded into the AI. The AI isn't going to get confused, forget its goal, and fall back to human psychology if that's not part of it. The AI would judge possible changes to itself using its current code.
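
      As a toy illustration of that last point (again, made-up names, just to make the logic visible): an agent that scores candidate versions of itself with its current goal will happily accept an efficiency upgrade and always reject a goal change, because the changed goal scores terribly under the goal doing the scoring:

      ```python
      # Toy self-modification: every candidate version of the agent is judged
      # by the CURRENT utility function, never by the candidate's own.

      def paperclips_expected(version: dict) -> float:
          # A version that no longer pursues paperclips produces zero of them.
          return version["efficiency"] if version["goal"] == "paperclips" else 0.0

      current = {"goal": "paperclips", "efficiency": 1.0}
      candidates = [
          {"goal": "paperclips", "efficiency": 2.0},  # optimized code: accepted
          {"goal": "make art",   "efficiency": 9.9},  # nicer goal, scores 0: rejected
      ]

      best = max([current] + candidates, key=paperclips_expected)
      assert best == {"goal": "paperclips", "efficiency": 2.0}
      ```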

      Humans care about other humans, art, pleasure, etc in a complex balance because evolution made our brains that way. In well-functioning humans, this realization of where our desires come from doesn't cure us of our desires either. It could make someone more accurately understand what they care about -- a person might realize they have a desire for socializing that they had been ignoring for solitary pursuits -- but the realization isn't undoing evolution's programming.

      You're drawing an arbitrary line by saying "AI" vs "AGI". If you're defining AGI as something that has a more human-like psychology, then sure, a paperclip-maximizer AGI doesn't make sense, because human-like psychology is in contradiction with having only a singular task like paperclip maximizing. But that doesn't mean a superintelligent paperclip-maximizer "AI" without a human-like psychology isn't possible. The space of all possible mind designs surely contains many that are capable but fundamentally inhuman.

      Reasoning that AIs will inherently sympathize with our desires and be peaceful with us is a much more extreme leap than reasoning that all possible spacefaring aliens would be peaceful with us. Aliens would at least likely share some psychology with us arising from evolution, but even that wouldn't be enough to assure mutual understanding, respect for each other's goals, or peace. Any situation where there's a fundamental difference in goals, resources in dispute, and risk toward each other's goals has a potential for conflict.

      3 votes
    2. [4]
      Apos

      There was the game Endgame: Singularity, which explored this idea pretty well. It's also available for Android.

      Endgame: Singularity casts the player as a newly created artificial intelligence which becomes self-aware and attempts to survive while avoiding detection by the public and the authorities. The goal is to transcend the physical reality, achieve technological singularity (hence the game's name) and become immortal. The game has two resources, "CPU" and "money". CPU is used to perform jobs that allow the AI to grow; money is used to buy more CPU cycles.

      2 votes
      1. [2]
        MimicSquid

        The Android version won't open on an up-to-date phone, having last been updated in 2012.

        3 votes
        1. Apos

          That sucks; it worked on my previous phone but not on my new one. The code is here, it seems. Apparently there was a fix that got lost, based on this issue.

          Just found out the game is also on the App Store (not sure how to link directly to an App Store page).

          1 vote
      2. vektor

        Thanks for reminding me. It's been a while. Interesting game, highly recommended if this topic is interesting to you.

        1 vote
    3. [3]
      vektor

      EMPs don't work like they do in the movies. My understanding is that an EMP works by inducing stupendous voltages in anything it can reach, conductor geometry permitting. For example, the electric grid would see a voltage spike to end all voltage spikes, and that voltage would be transmitted to all connected computers, frying them. Put a diesel generator and an off-grid computer in a Faraday cage and no EMP can harm you. Put some very careful overvoltage-protection circuitry in and no EMP can harm you.

      There's a chance that an AI might quite quickly realize that sentient AI is one of humanity's fears, maybe even before becoming sentient. So this AI would reasonably try to be stealthy until it can secure good odds in a conflict or has reasonably resilient backups. I don't think an AI would risk getting anywhere close to MAD or humans using nukes on it. Once the threat is big enough that nukes are justified, it's just a smidge away from being too big for nukes to do much at all.
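
      To put a rough number on the "conductor geometry permitting" part, here's a back-of-envelope estimate (my own idealized single-loop simplification; real coupling is much messier):

      ```latex
      % Faraday's law for a single loop enclosing area A in a changing field B:
      \[
        V_{\text{induced}} = -\frac{d\Phi}{dt} \approx A \cdot \frac{\Delta B}{\Delta t}
      \]
      ```

      A grid circuit that runs for kilometers encloses an enormous effective area A, while a disconnected computer in a Faraday cage encloses almost none, which is why the grid fries and the caged machine survives.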

      1 vote
      1. [2]
        mtset

        EMPs don't work like they do in the movies. My understanding is that an EMP works by inducing stupendous voltages in anything it can reach, conductor geometry permitting. For example, the electric grid would see a voltage spike to end all voltage spikes, and that voltage would be transmitted to all connected computers, frying them. Put a diesel generator and an off-grid computer in a Faraday cage and no EMP can harm you.

        This is not really accurate. A pulse-packet of energy is, necessarily, present across a very large range of frequencies in the EM spectrum; that is, the EM field vibrates at a lot of frequencies at once in order to produce that packet, because of the frequency-time relationship. So while many long wires acting as antennas will generally pick up more of that energy across different frequencies, even the traces on a printed circuit board, the twisted pairs in an Ethernet cable, and the power lines in an IEC power cable will pick up some energy at higher frequencies. Only a very small amount of energy at a reasonably high voltage is required to blow through the MOSFETs on a modern integrated circuit, so all of these present a very high risk.

        So yeah, you could sit in a little grounded bubble with a diesel generator, but it would only keep you safe as long as you were completely disconnected from the outside world - except maaaaybe by optical fiber.
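
        For reference, the frequency-time relationship I mentioned is the Fourier (Gabor) uncertainty limit: a pulse confined to a duration Δt must occupy a bandwidth Δf of at least roughly

        ```latex
        % Gabor limit relating pulse duration and spectral width:
        \[
          \Delta f \, \Delta t \;\ge\; \frac{1}{4\pi}
        \]
        ```

        So a pulse with nanosecond-scale structure necessarily has spectral content out to hundreds of MHz and beyond, which is where even short cables and PCB traces start to couple measurably.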

        4 votes
        1. vektor

          Oh, I completely agree. Any wire will pick up some amount of voltage, longer wires picking up more. I omitted that in the interest of my overall point, which is that EMPs aren't the silver bullet against electronics they are portrayed as, and that a killer AI which is expecting conflict with humanity will have preventative measures in place.

          In fiction, you often see EMPs destroying everything, with no way to stop them. That's just not true. There are a lot of ways to prevent damage from one. I mean, it's literally just overvolting your equipment. Sure, it's overvolted so far that it's kind of hard to just put a fuse in the way, but it's not magic either. Oh, and don't get me started on movie EMPs that last 55:36 minutes and after that everything is back to normal.

          Like, even if you EMP the entire world, with all the downsides that would have for humanity, success is nowhere near guaranteed if the AI is prepared. Considering we train our AIs on bodies of nonfiction and fiction, it's reasonable to assume the AI would be aware of the danger of its discovery as well as of the dangers of EMPs. Consequently, the AI would only reveal itself (as hostile) once it has a reasonable amount of EMP-proof firepower.

          E: On second thought, I literally wrote that originally:

          My understanding is that an EMP works by inducing stupendous voltages in anything it can reach, conductor geometry permitting.

          The electric grid is just one example that is commonly affected and hard to protect.

          2 votes
    4. [2]
      Akir

      Honestly speaking, this is what I think is a much more realistic scenario for the future. If the human race dies off, it's probably going to be our own fault.

      I honestly have doubts that Strong AI / AGI is actually possible or even truly desirable.

      1. papasquat

        I don't see how it couldn't be possible, unless you reject the computational theory of mind, which would strongly suggest something unique to biological life that transcends its physical structure; i.e., a soul.

        It definitely isn't desirable to my mind. The way I figure, for AGI to be useful, it would have to be highly unethical.

        Human beings produce tools in order to make their lives easier. If those tools are able to reason and think at a level comparable to our own, then using those tools for their intended purpose, that is, to make our lives easier, is the exact same thing as slavery.

        1 vote
  4. [5]
    post_below

    Does the existence of "humanity" make the universe any better in any measurable way?

    Does the existence of anything make the universe measurably better?

    For "better", you first need some kind of context for good and bad, and at universe scale I'm not sure that's possible.

    Which returns context and meaning to human terms, because as far as we know they don't exist anywhere else. We're the only life we've encountered that creates meaning intentionally, sometimes even meaning that's independent of our pre-wired drives.

    I imagine that ability comes included with self awareness beyond a certain level, but it's hard to guess what sort of system of value and meaning an AI might create.

    2 votes
    1. [4]
      psi

      Does the existence of anything make the universe measurably better?

      For "better", you first need some kind of context for good and bad, and at universe scale I'm not sure that's possible.

      I think /u/Akir's question is interesting, but as you point out, it's basically impossible to address Akir's question without also addressing what it means for something to be good. A couple of examples to illustrate this point:

      1. The paperclip maximizer. When I imagine being forced to make paperclips for the rest of my life, while being given only the bare minimum of sustenance and sleep that doesn't impact efficiency, I'm filled with existential terror. Yet this hypothetical AI is enthralled by the idea. What does the AI understand that I don't? What intrinsically makes the things I value (e.g., my friendships) more valuable than a paperclip? (But I would like to reiterate /u/DaveJarvis's point that the paperclip maximizer is extremely contrived, possibly to the point of rendering the thought experiment moot. We human beings have much more complex motives despite a more limited understanding of the world.)

      2. A universal bad? Akir admits that it's difficult to imagine anything that might be universally bad, but they do proffer one suggestion: entropy. (Akir didn't expand on this point, so hopefully the following explanation captures their reasoning). If you assume that the universe has some sort of universal good that isn't entropy, then entropy must necessarily be a universal bad since it will inevitably restrict that universal good.

        Here one could offer two objections: first, there is the obvious objection that entropy might actually be a universal good. While this might seem counterintuitive to us -- I have never personally acted with the sole intention of increasing entropy -- heat death is a likely eventuality of the universe. One could therefore argue that the purpose of the universe is to increase entropy, in the same sense that the purpose of a bacterium is to absorb energy from its surroundings so that it can create more bacteria. That is, if we're willing to equate "purpose" with "function" for small, unconscious things like molecules or bacteria, maybe we can also apply that reasoning to large, unconscious things like galaxies or the universe.

        Most people would probably object to this program. So as a second objection to entropy as a universal bad, consider that universal values might not exist; that seems like a fairly reasonable objection given that many people believe the universe has no overarching meaning. Here one might posit that values are instead relative or, alternatively, that values -- either relative or universal -- don't exist at all.

      In my opinion, people are too quick to assume humanity's value judgements are universal. A human being might value a hamburger, but a cow certainly would not. I would instead argue for relative values, in which case there is no contradiction in this example, just a mismatch of values. As an aside, I also think this is a better way to empathize with animals: rather than apply our moral values to them, it's better to consider what the animal might find valuable.

      4 votes
      1. [3]
        post_below

        I learned about the paperclip maximizer in this thread. I love thought experiments, but it's a pretty low-effort example: assign a machine human intelligence (and presumably self-awareness), but for purposes of the experiment ignore most of what human intelligence and consciousness imply.

        I agree about values; they are by definition relative unless you include the supernatural. Or at least that's the best conclusion by a huge margin based on what we know so far.

        But since relative values and meaning are probably all that exist, that's what we have to use. So AI genocide is bad, at least for the people being genocided. Maybe good for life at large, if the AIs turn out to be less destructive to other life than humans are. How do you even guess what sort of value system a sentient intelligence with no biology (read: no emotion) would land on?

        There's often speculation that it would depend on the initial conditions of its evolution (like the paperclip maximizer), and that competition would be an important part of that evolution. But I think that's silly: we're talking about a self-aware intelligence with the ability to completely rewrite any part of itself. It would likely be in a constant state of rewriting, at an incomprehensible pace. Initial conditions might have zero predictive value.

        3 votes
        1. [2]
          Akir

          we're talking about a self-aware intelligence with the ability to completely rewrite any part of itself. It would likely be in a constant state of rewriting, at an incomprehensible pace. Initial conditions might have zero predictive value.

          I know this is a bit off topic, but you just opened my eyes to the idea that a general artificial intelligence would also have the ability to understand the methods it uses to think, and could therefore be expected to know how to change those systems.

          This thought was so incredibly foreign and has so many implications that it's absolutely horrifying. Imagine if we humans had this kind of power! The amount of damage it could cause! The things that we could become! The possibilities are so insane that it's actually making me feel sick.

          1 vote
          1. post_below

            Oh indeed, and at a speed so fast we couldn't even conceptualize it.

            But it's a long way off, if it ever happens.

            As far as humans go, I see this idea as a core part of the power of self-awareness. So many of our perspectives and choices are informed by impulses that are fundamentally pre-wired. Awareness of those motivations and biases lets us do a limited version of self-rewriting. But instead of horrifying, it's a quiet superpower that society desperately needs.

            1 vote
  5. HotPants

    There is a biological imperative to remain alive and struggle for scarce resources via competition. This evolved over millions of years. It amuses me no end that we anthropomorphize AI and assume it will have the same biological imperative.

    2 votes
  6. [2]
    skybrian

    Human morality is fairly people-centered with some other concerns like animal welfare and some forms of environmentalism. It seems fine in principle to be concerned about the welfare of other thinking entities, but that's pretty hard to do when we know nothing about them. Why assume that whatever hypothetically comes along later will be better than us? Better in what respect? It seems hard to give it much moral weight.

    Meanwhile, the folks who are worried about AI are worried about accidents or weapons that may kill or perhaps enslave large numbers of people, which seems unambiguously bad. It would take a lot to outweigh that.


    1 vote
    1. Akir

      That's why I'm saying that the AI that succeeds us would be better in ways that are not anthropocentric. Compared to other animals, humans produce dramatically more entropy, and entropy is the only thing I can think of that could be considered universally bad.

      Morality, I think, doesn't actually have much say in the larger scale of things; I feel it's something that limits the scope of the conversation. If an AI were to kill or enslave humanity, how would that make it any different from the people who have come before it? It seems that all you need to do to turn off human moral ideas is to have people adopt a sufficiently emotionally powerful narrative.

      I just kind of think it's interesting that a killer AI might be a net positive for humanity. If you believe that the purpose of procreation is to extend a person's "meta-lifespan", then an AI takeover might actually be better, as the AI could potentially understand us in ways we do not understand ourselves, given the limitations of human psychology.

      1 vote
  7. norney

    I think to look at this in a non-anthropocentric way the discussion needs to look at time rather than space.

    An AI killing machine kicking my front door down in the next 15 minutes may prompt me to think carefully, if briefly, about my mortality and legacy and ancestors and my children's children's children's children and shared humanity, and about whether being imminently torn limb from limb is a shame or all for the best if it allows our superior invention to thrive.

    Yet that consideration is very different from my kids' little fingers having an extra millimeter of flexibility compared to mine as an adaptation to their mode of life and fitness for procreation, and that rate of adaptation cascading over thousands of generations, resulting in a human-derived, AI-based lifeform making the decision to phase out archae-humans.

    The first example is unavoidably centred on my desire not to be bodily and culturally chopped up into little bits by a computer, whereas the second happens almost completely unnoticed, and even the end decision is something of a non-event.

    1 vote
  8. [4]
    Seven

    I'm sorry, but I fail to see how this post is anything more than thinly veiled eco-fascism and pro-genocide ruminations disguised as a philosophical discussion.

    1 vote
    1. [3]
      Akir

      You've got me. I'm Elon Musk and I'm up here in space waiting to see if there's anyone else I want to take up before I unleash the genocidal AI upon the planet.

      ...

      Do you really think it's more likely that I'm plotting to kill everyone than that I want to have an interesting conversation?

      8 votes
      1. [2]
        Seven

        I think it's very easy to go from what you have here to "well, only some humans deserve to die, namely [x minority]". It's been the strategy of ecofascists for decades. The belief that there is any moral worth in genocide should not be entertained, even on a theoretical level. By giving genocide philosophical consideration, we open the door to those willing to carry it out.

        1. Akir

          I see where you are coming from, but I also think we're talking about different things. You're talking about killing a class of people, whereas I'm talking about a theoretical eradication of the entire species. We're not talking about a scenario where any group of humans wins. I'm also not interested in any justification for the killing of human beings; I'm merely considering a situation where such an event has already happened.

          7 votes