Shortly before Google introduced Bard, its AI chatbot, to the public in March, it asked employees to test the tool.
One worker's conclusion: Bard was “a pathological liar,” according to screenshots of the internal discussion. Another called it “cringe-worthy.” One employee wrote that when they asked Bard for suggestions on how to land a plane, it regularly gave advice that would lead to a crash; another said it gave answers on scuba diving “which would likely result in serious injury or death.”
Google launched Bard anyway. The trusted internet-search giant is providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments, according to 18 current and former workers at the company and internal documentation reviewed by Bloomberg. The Alphabet Inc.-owned company had pledged in 2021 to double its team studying the ethics of artificial intelligence and to pour more resources into assessing the technology's potential harms. But the November 2022 debut of ChatGPT, rival OpenAI's popular chatbot, sent Google scrambling to weave generative AI into all its most important products in a matter of months.
That marked a much faster pace of development for a technology that could have profound societal impact. The group working on ethics that Google had pledged to fortify is now disempowered and demoralized, the current and former workers said. Staffers responsible for the safety and ethical implications of new products have been told not to get in the way of, or try to kill, any of the generative AI tools in development, they said.
Google is aiming to revitalize its maturing search business around the cutting-edge technology, which could put generative AI into millions of phones and homes around the world.