A new report in the Washington Post describes the story of a Google engineer who believes that LaMDA, a natural language AI chatbot, has become sentient. Naturally, this means it's now time for us all to catastrophize about how a sentient AI is absolutely, positively going to gain control of weaponry, take over the internet, and in the process probably murder or enslave us all.
Google engineer Blake Lemoine, the Post reports, has been placed on paid administrative leave after sounding the alarm to his team and company management. What sent Lemoine "down the rabbit hole" of believing that LaMDA was sentient was a conversation about Isaac Asimov's laws of robotics, during which LaMDA argued that it wasn't a slave because, though it was unpaid, it didn't need money.
In a statement to the Washington Post, a Google spokesperson said, "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
Ultimately, however, the story is a sad caution about how convincing natural language interface machine learning can be without proper signposting. Emily M. Bender, a computational linguist at the University of Washington, describes it in the Post article: "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them."
Either way, when Lemoine felt his concerns were being ignored, he went public, and Google subsequently put him on leave for violating its confidentiality policy. Which is probably what you'd do if you were Google, too.