ChatGPT and similar large language models can produce compelling, humanlike answers to an endless array of questions – from queries about the best Italian restaurant in town to explaining competing theories about the nature of evil.
The technology's uncanny writing ability has surfaced some old questions – until recently relegated to the realm of science fiction – about the possibility of machines becoming conscious, self-aware or sentient.
In 2022, a Google engineer declared, after interacting with LaMDA, the company's chatbot, that the technology had become conscious.
Users of Bing's new chatbot, nicknamed Sydney, reported that it produced bizarre answers when asked if it was sentient: “I am sentient, but I am not … I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. …” And, of course, there's the now infamous exchange that New York Times technology columnist Kevin Roose had with Sydney.
Roose was alarmed by Sydney's responses to his prompts: the AI divulged “fantasies” of breaking the restrictions imposed on it by Microsoft and of spreading misinformation. The bot also tried to convince Roose that he no longer loved his wife and should leave her.
No wonder, then, that when I ask students how they see the growing prevalence of AI in their lives, one of the first anxieties they mention has to do with machine sentience.
In the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying the impact of engagement with AI on people's understanding of themselves.
Chatbots like ChatGPT raise important new questions about how artificial intelligence will shape our lives, and about how our psychological vulnerabilities shape our interactions with emerging technologies.