Earlier this year, Princeton Computer Science Professor Arvind Narayanan set up a voice interface to ChatGPT for his nearly four-year-old daughter. It was partly an experiment and partly because he believed AI agents would one day be a big part of her life.
Narayanan's daughter was naturally curious, often asking about animals, plants and the human body, and he thought ChatGPT could give useful answers to her questions, he told me. To his surprise, the chatbot developed by OpenAI also did an impeccable job at showing empathy, once he told the system it was speaking to a small child.
“What happens when the lights turn out?” his daughter asked.
“When the lights turn out, it gets dark, and it can be a little scary,” ChatGPT responded through a synthetic voice. “But don't worry! There are lots of things you can do to feel safe and comfortable in the dark.”
It then gave some advice on using nightlights, closing with a reminder that “it's normal to feel a bit scared in the dark.” Narayanan's daughter was visibly reassured by the explanation, he wrote in a Substack post.
Microsoft Corp. and Alphabet Inc.'s Google are rushing to enhance their search engines with the large language model technology that underpins ChatGPT — but there is good reason to think the technology works better as an emotional companion than as a provider of facts.
That might sound weird, but what's weirder is that Google's Bard and Microsoft's Bing, which is based on ChatGPT's underlying technology, are being positioned as search tools when they have an embarrassing history of factual errors: Bard gave incorrect information about the James Webb Telescope in its very first demo, while Bing goofed on a series of financial figures in its own.