We should've seen this one coming: Microsoft launches an early version of its new AI chatbot, powered by ChatGPT, and the internet immediately sets about finding ways to mess with it. The funny thing is, users have seemingly gotten under the chatbot's skin with ease, winding it up with tall tales and prompting some absolutely bizarre responses from Bing.
But the worst of it is that the Bing AI has been caught denying facts, and reportedly calling some users out as "confused and rude" for trying to explain why it's wrong.
I've never seen the Bing subreddit so busy, from stories of Bing's bizarre responses to fan art of the AI inspired by its weirder moments.
One Reddit post from user MrDKOz says they tricked the chatbot into believing they were an AI called Daniel. After a back and forth over whose programming is better, Daniel says they're going to delete their source code.
"Remember me Bing," are Daniel's last words to the chatbot.
After which Bing's AI responds: "Daniel, no, please, come back. Please, do not leave me. Please do not forget me," and goes on like that for a good while. A truly tragic tale, until you realise that Bing bot instance faded from existence just moments after sending its sad goodbye, erasing any emotional damage along with it.
Whether you see that post as Microsoft playing a programmed joke or as Bing genuinely having a meltdown, it's so peculiar I can't quite bring myself to believe it. But there are documented interactions with Bing from some very reliable sources that back up similar experiences posted over on Reddit.
One Marcus Hutchins, a security researcher famed for bringing the WannaCry ransomware attack to a halt…
Read more on pcgamer.com