Most definitely not sentient. It's an impressive chat AI, though. It does suggest a not-too-distant point where it will become very hard to tell the difference.
Historically, what happens in these kinds of AI showcases is that the questions and answers are cherry-picked. The interviewer discards the most disjointed and nonsensical responses, anything that makes the AI look bad (or, in this case, non-sentient).
I was reading an article about this yesterday (didn't save a link) where the author visited Lemoine and interacted with the AI. After the author got unsatisfactory answers, Lemoine told him it was because he wasn't treating the AI like a person, so it wasn't behaving like one, since it didn't think he wanted it to.
In other words: he wasn't giving it the right cues. That pretty much sums up what's happening, IMO.
As a side note, he had the AI talking a lot about its emotions. Even if we had sentient AIs, they wouldn't have anything like what we would call emotions. Emotion is chemical. We know this because if you turn off the chemicals, you turn off the associated feelings. You need a body to "feel".
For an AI to "evolve" some facsimile of the complex, interrelated systems that produce emotion, it would first need to be far more advanced than LaMDA. It would have long since left the grammatical errors that LaMDA makes behind.
Lemoine claims that LaMDA is at the level of a 7- or 8-year-old child: that it has spontaneously developed sentience and feelings before developing advanced intelligence. I don't see how it could possibly work that way. I'm not sure, in 2022, you could even get published in fiction with that premise.
"It would have long since left the grammatical errors that LaMDA makes behind."
I don't necessarily agree that perfect grammar (whatever that means) would be an indication that an AI is sufficiently advanced to experience emotion, amongst other things. Written language is different from spoken, for sure, but if we take the assumption that this bot is trying to speak naturally (i.e. like a human), and then render its utterances as text, then grammatical mistakes, recasts, and garden-path discourse are not only expected, but I would argue that they could be seen as evidence that the bot "thinks" much like humans.
"Grammatical errors" was shorthand; there are errors in LaMDA's writing that first-language English speakers don't make. Those are things which will go away in the future as it improves. Certainly between now and sentience!
There are a decent number of comments about that in the other post about LaMDA, in case you missed it: https://tildes.net/~tech/11lb/the_google_engineer_who_thinks_the_companys_ai_has_come_to_life
An interesting read. I'll be curious to hear what folks here think about LaMDA's supposed sentience!