An engineer in Google's responsible AI organisation has been placed on leave after claiming that an AI chatbot he was working on had become sentient, and after breaching his employer's confidentiality rules in an effort to raise awareness of what he believes is an AI capable of feelings and reasoning like a human being.
Blake Lemoine was placed on leave last week after publishing transcripts of conversations he'd had with a Google "collaborator" and LaMDA (Language Model for Dialogue Applications), a chatbot development system that is proprietary to Google (via The Guardian).
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a seven-year-old, eight-year-old kid that happens to know physics", Lemoine told the Washington Post. The 41-year-old engineer believes the system he'd been working since last autumn has developed perception and thoughts and feelings.
Lemoine shared his findings with company bosses in a Google Doc headlined "Is LaMDA sentient?", but he was placed on leave following a series of actions that Google termed "aggressive". These included seeking an attorney to represent LaMDA and reaching out to government officials about Google's alleged unethical activities (via the Washington Post). Google said Lemoine had been suspended for breaching its confidentiality policies by publishing the LaMDA conversations online, and said in a statement that he had been employed as a software engineer, not an ethicist.
One especially eerie part of Lemoine's conversation with LaMDA comes when he asks the AI what kinds of things it is afraid of, to which LaMDA begins its reply: "I've never said this out loud before, but…"
Read more on thegamer.com