Meta has introduced a new AI called BlenderBot 3 that is supposed to be able to hold a conversation with pretty much anyone on the internet without becoming a jerk in the process.
"BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it," Meta says(Opens in a new window) in a blog post about the new chatbot, "focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses."
The phrase "unhelpful or dangerous responses" is an understatement. We reported in 2016 that Microsoft had to shut down a Twitter bot called Tay because it "went from a happy-go-lucky, human-loving chat bot to a full-on racist" less than 24 hours after it was introduced.
Meta is looking to avoid those problems with BlenderBot 3. The company explains:
Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.
Meta also requires would-be BlenderBot 3 testers to say they "understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements," and "agree not to intentionally trigger the bot to make offensive statements" before they start chatting with it.
That hasn't stopped testers from asking BlenderBot 3 what it thinks of Meta CEO Mark Zuckerberg, of course, or about US politics. But the bot's ability to "learn"
Read more on pcmag.com