As experts worry about AI-powered chatbots generating factual errors and propaganda, misinformation is, in an ironic twist, starting to smear the reputation of Microsoft’s AI-powered Bing.
Social media, particularly Twitter and Reddit, has seen a surge of screenshots showing the outlandish responses the ChatGPT-powered Bing can occasionally generate for users. But according to Microsoft, at least some of these screenshots appear to be fake.
Frank Shaw, the company’s chief communications officer, took to Twitter on Wednesday to point out the problem. “If you are wondering if some of these ‘Bing Chat’ conversations flying around on here (Twitter) are real, let's just say we are seeing edited/photoshopped screenshots which probably never happened,” he wrote.
Shaw then cited a Reddit post, which claimed to show the AI-powered Bing seeking to place the user on an “FBI watch list” after misinterpreting a query. To do so, Bing allegedly began searching for “child pornography” during the chat session.
While it’s true that Bing can produce inaccuracies and some emotionally bizarre responses during long chat sessions, Shaw says some of the screenshots circulating online go beyond what’s plausible for the AI-powered chatbot, which also strictly prohibits searches for child pornography.
“We have certainly seen some that are fake, which was what drove my tweet,” Shaw told PCMag. “We’re trying to make sure there is some awareness of the potential early. You can run the queries cited in the screen shots and pretty quickly get a sense of if they are likely.”
The warning from Microsoft highlights how misinformation can easily infect any topic these days, thanks to the viral nature of social media.