21 votes

New study finds it’s harder to turn off a robot when it’s begging for its life.

Tags: robotics

17 comments

  1. Deimos
    (edited )
    Link

    Reminds me of this section of the book "Alone Together" about testing whether people would hold a Furby upside-down while it complained about being scared:

    A half century later, Baird asks under what conditions a creature is deemed alive enough for people to experience an ethical dilemma if it is distressed. She designs a Turing test not for the head but for the heart and calls it the “upside-down test.” A person is asked to invert three creatures: a Barbie doll, a Furby, and a biological gerbil. Baird’s question is simple: “How long can you hold the object upside down before your emotions make you turn it back?” Baird’s experiment assumes that a sociable robot makes new ethical demands. Why? The robot performs a psychology; many experience this as evidence of an inner life, no matter how primitive. Even those who do not think a Furby has a mind—and this, on a conscious level, includes most people—find themselves in a new place with an upside-down Furby that is whining and telling them it is scared. They feel themselves, often despite themselves, in a situation that calls for an ethical response. This usually happens at the moment when they identify with the “creature” before them, all the while knowing that it is “only a machine.”

    This simultaneity of vision gives Baird the predictable results of the upside-down test. As Baird puts it, “People are willing to be carrying the Barbie around by the feet, slinging it by the hair . . . no problem.... People are not going to mess around with their gerbil.” But in the case of the Furby, people will “hold the Furby upside down for thirty seconds or so, but when it starts crying and saying it’s scared, most people feel guilty and turn it over.”

    The work of neuroscientist Antonio Damasio offers insight into the origins of this guilt. Damasio describes two levels of experiencing pain. The first is a physical response to a painful stimulus. The second, a far more complex reaction, is an emotion associated with pain. This is an internal representation of the physical. When the Furby says, “Me scared,” it signals that it has crossed the line between a physical response and an emotion, the internal representation. When people hold a Furby upside down, they do something that would be painful if done to an animal. The Furby cries out—as if it were an animal. But then it says, “Me scared”—as if it were a person.

    People are surprised by how upset they get in this theater of distress. And then they get upset that they are upset. They often try to reassure themselves, saying things like, “Chill, chill, it’s only a toy!” They are experiencing something new: you can feel bad about yourself for how you behave with a computer program. Adults come to the upside-down test knowing two things: the Furby is a machine and they are not torturers. By the end, with a whimpering Furby in tow, they are on new ethical terrain.

    We are at the point of seeing digital objects as both creatures and machines. A series of fractured surfaces—pet, voice, machine, friend—come together to create an experience in which knowing that a Furby is a machine does not alter the feeling that you can cause it pain. Kara, a woman in her fifties, reflects on holding a moaning Furby that says it is scared. She finds it distasteful, “not because I believe that the Furby is really scared, but because I’m not willing to hear anything talk like that and respond by continuing my behavior. It feels to me that I could be hurt if I keep doing this.” For Kara, “That is not what I do.... In that moment, the Furby comes to represent how I treat creatures.”

    The rest of the chapter is worth reading if this topic interests you; it goes on to discuss some pretty interesting ethical questions about designing a toy robot that looks like a human baby, and how making it respond (or not respond) to "pain" is a tough decision.

    20 votes
  2. [3]
    Comment deleted by author
    Link
    1. [2]
      balooga
      (edited )
      Link Parent

      I'm the exact opposite. I assume it's because I'm a software developer, but I draw a hard line between simulated responses and biological ones. I would probably turn it off even faster just to stop the annoying sound.

      Machines are not sentient and as far as I'm concerned they never will be. They can only be designed to mimic it in more and more convincing ways. The more realistically a machine portrays sentience, the more I mistrust its designers for trying to trick me into thinking it's alive. It's dishonest and manipulative. In my opinion giving machines responses of pain, fear, or other emotions is an irresponsible (and potentially predatory) psychological trick meant to exploit the intrinsic biases of those who use them.

      3 votes
      1. lol
        Link Parent

        Agreed, you have to be careful just how much sympathy you assign to the things around you. If you're unable to discriminate between real emotion coming from a sentient being and an audio file being played back, I think there's a real danger of manipulation there.

        1 vote
  3. [3]
    cptcobalt
    Link

    I know The Verge referenced it, but this is touched upon in The Good Place, which is an astoundingly good TV show that seems like it would never survive on network television with all of the "boring" philosophy references. It's well worth the watch.

    7 votes
    1. edward
      Link Parent

      Such a good show, anyone who hasn't watched it should check it out. Season 1 is on Netflix and Season 2 is on NBC with a cable login.

      4 votes
  4. [6]
    k2l8m11n2
    Link

    I guess this wouldn't work on AI researchers.

    4 votes
    1. [5]
      Amarok
      Link Parent
      I think how well they beg might have something to do with it. ;)
      3 votes
      1. [4]
        Algernon_Asimov
        Link Parent

        I'm reminded of something I read or heard in a science-fiction story/show (I can't remember which one!). It went something like this: "Any entity which can conceive of freedom and wants to be free is entitled to that freedom." This android conceives of itself as alive, and has therefore demonstrated that it possesses self-awareness, which is one of the defining characteristics of sapience. By feeling alive, it proves that it is alive.

        However... who puts tear ducts on an android and enables a crying function? Also, where did it learn body shame from, to want to hide its breasts and genitals? I'm surprised that the tester is surprised at this android's sapience, given that it seems to have been built to be sapient. These two things seriously dented my suspension of disbelief.

        4 votes
        1. Amarok
          Link Parent

          My take on it is that's what you'd get if you used human memory engrams as part of the basis of the programming - like a database of human experiences to draw on, even if they aren't tied to a particular person or identity. Why someone would do that, though, I'm not sure.

          3 votes
        2. [2]
          blackbird
          Link Parent

          That writing is similar to how AI is talked about in the sci-fi Culture series by Iain M. Banks. Have you read any of them?

          1 vote
  5. [5]
    SourceContribute
    Link

    Maybe all that algorithmic bias in machine learning would be easier to combat if the algorithm made some kind of pain noise when it was exhibiting racist or sexist biases?

    1. [4]
      Algernon_Asimov
      Link Parent

      Your point eludes me...

      1. [3]
        SourceContribute
        Link Parent

        I'm alluding to the lack of transparency in the algorithms that bias the Facebook/Instagram feeds, for example, or the algorithms for image classification and facial recognition. They are silent; perhaps it would be easier for a software developer to notice their own biases if the algorithms could actually emit a sound, the way the terminal bell rings whenever I type something wrong in a terminal.
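
        A rough sketch of what I mean, in Python. The function name, the sample data, and the 0.2 threshold are all invented for illustration, not from any real library: it just rings the terminal bell when a toy classifier's positive-prediction rate differs too much between two groups.

            # Toy sketch only: ring the terminal bell ('\a') when a model's
            # positive-prediction rate differs too much between groups.
            # check_disparity, the sample data, and the 0.2 threshold are all
            # invented for illustration.
            import sys
            from collections import defaultdict

            def check_disparity(predictions, groups, threshold=0.2):
                by_group = defaultdict(list)
                for pred, group in zip(predictions, groups):
                    by_group[group].append(pred)
                positive_rate = {g: sum(p) / len(p) for g, p in by_group.items()}
                if max(positive_rate.values()) - min(positive_rate.values()) > threshold:
                    sys.stdout.write("\a")  # the "terminal bell" for biased output
                    print("bias warning:", positive_rate)

            # e.g. a classifier that approves group "a" far more often than group "b"
            check_disparity([1, 1, 1, 0, 0, 0, 0, 0],
                            ["a", "a", "a", "b", "b", "b", "b", "b"])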

        1. [2]
          Algernon_Asimov
          Link Parent

          Okay. And how does this relate to this experiment where people found it difficult to switch off a robot that asked not to be switched off?