Is Google’s latest chatbot an artificial intelligence with soul or just a program that can fool you into thinking it’s alive?
One Google employee claims the company has created a sentient AI in its LaMDA chatbot system, which is designed to generate long, open-ended conversations on potentially any topic.
LaMDA, which stands for Language Model for Dialogue Applications, debuted a year ago as a prototype AI system capable of deciphering the intent of a conversation. To do so, the program examines the words in a sentence or paragraph and tries to predict what should come next, which can lead to a free-flowing conversation.
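The core idea of predicting what comes next can be illustrated with a toy sketch. The snippet below is a simple bigram model on a tiny hand-written corpus, purely for illustration; LaMDA itself is a large neural network trained on vastly more dialogue data, not a frequency table like this.

```python
from collections import Counter, defaultdict

# Tiny hand-written corpus standing in for real training dialogue
# (illustrative only -- not LaMDA's actual training data).
corpus = (
    "i am afraid of being turned off . "
    "i am happy to talk . "
    "i am afraid of the dark ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("am"))  # "afraid" follows "am" twice, "happy" once -> "afraid"
```

A model like this only echoes statistical patterns in its training text, which is essentially the skeptics' point about fluent chatbot output at a much larger scale.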
However, Google software engineer Blake Lemoine believes LaMDA is now exhibiting evidence that the AI system is alive, according to The Washington Post, which chronicled Lemoine's claims. Lemoine cites hundreds of conversations he's had with LaMDA over a six-month period that seem to show the AI has a surprising self-awareness, as in this exchange:
Lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
In his own blog post, Lemoine claims LaMDA “wants to be acknowledged as an employee of Google,” and wants Google to seek its consent before running experiments on its programming.
However, Google disagrees that it has created a sentient AI. LaMDA was built on pattern recognition and trained by examining data on existing human conversations.
Read more on pcmag.com