5 votes

Is LaMDA Sentient? - An Interview

6 comments

  1. [6]
    bilbodwyer
An interesting read. I'm curious to know what folks here think about LaMDA's supposed sentience!

    2 votes
    1. [3]
      post_below

Most definitely not sentient. It's an impressive chat AI, though. It suggests a not-too-distant point at which it will become very hard to tell the difference.

      Historically what happens in these kinds of AI showcases is that the questions and answers are cherry picked. The interviewer discards the most disjointed and nonsensical responses, anything that makes the AI look bad (or in this case non-sentient).

      I was reading an article about this yesterday (didn't save a link) where the author visited Lemoine and interacted with the AI. After the author got unsatisfactory answers, Lemoine told him that it was because he wasn't treating the AI like a person so it wasn't behaving as one because it didn't think he wanted it to.

In other words: he wasn't giving it the right cues. That pretty much sums up what's happening, IMO.

As a side note, he had the AI talking a lot about its emotions. Even if we had sentient AIs, they wouldn't have anything like what we would call emotions. Emotion is chemical. We know this because if you turn off the chemicals, you turn off the associated feelings. You need a body to "feel".

For an AI to "evolve" some facsimile of the complex, interrelated systems that produce emotion, it would first need to be far more advanced than LaMDA. It would have long since left the grammatical errors that LaMDA makes behind.

Lemoine claims that LaMDA is at the level of a 7- or 8-year-old child; that it has spontaneously developed sentience and feelings before developing advanced intelligence. I don't see how it could possibly work that way. I'm not sure, in 2022, you could even get published in fiction with that premise.

      4 votes
      1. [2]
        bilbodwyer

        It would have long since left the grammatical errors that LaMDA makes behind.

I don't necessarily agree that perfect grammar (whatever that means) would be an indication that an AI is sufficiently advanced to experience emotion, amongst other things. Written language is different from spoken, for sure, but if we take the assumption that this bot is trying to speak naturally (i.e. like a human), and then render its utterances as text, then grammatical mistakes, recasts, and garden-path discourse are not only expected, but I would argue they could be seen as evidence that the bot "thinks" much like humans.

        1. post_below

"Grammatical errors" was shorthand: there are errors in LaMDA's writing that (first-language) English speakers don't make. These are things that will go away in the future as it improves. Certainly between now and sentience!

          1 vote
    2. vegai

If this story is the whole story, then I'm forced to wonder why Google hired this individual but not me :) Because there's no way that short exchange is any sort of evidence for sentience. But if there's more to it, this might be interesting. Did the employee have some actually good reason, not revealed by any news article (for semi-obvious reasons), for thinking this?

      3 votes