According to internal documents reviewed by Bloomberg, several Google employees raised concerns that its Bard AI chatbot was not ready for its March release, citing low-quality, incorrect, and potentially dangerous answers. Employees allegedly described Bard as a "pathological liar" and its answers as "cringe-worthy."
Bard is Google's answer to OpenAI's ChatGPT. CEO Sundar Pichai said it "combines the breadth of the world's knowledge with the power, intelligence, and creativity of our large language models." But Bloomberg reports that Google rushed Bard out the door to compete with ChatGPT in what the company allegedly called a competitive "code red."
According to Bloomberg's report, an employee sent a message to an internal message group that was viewed by 7,000 employees saying, “Bard is worse than useless: please do not launch.” Right before the launch, Google's AI governance lead, Jen Gennai, reportedly overruled a risk evaluation by her own team when they said that Bard's answers could potentially be harmful.
Bloomberg reports a couple of examples: suggestions for landing a plane that would have resulted in a crash, and scuba diving instructions that could have resulted in "serious injury or death."
Meredith Whittaker, a former manager at Google, told Bloomberg that "AI ethics has taken a back seat" at the company.
ChatGPT has its own issues with the truth and with properly sourcing the information it scrapes for answers. Currently, Google refers to Bard as an "experiment" and, in Bloomberg's words, maintains that "responsible AI is a top priority" for the company. As an experiment, I asked Google Bard if its advice is potentially dangerous.
Read more on pcgamer.com