Since the emergence of OpenAI's ChatGPT in November 2022, artificial intelligence (AI) chatbots have become extremely popular around the world. This technology puts the world's information just a prompt away, ready to be tailored as you please. You no longer need to go to Google Search, type in your query and sift through results to find the answer you're looking for. Simply ask an AI chatbot and it will present the answer in a flash. However, the content that AI chatbots present is not always factual and true. In a recent case, two very popular AI chatbots, Google Bard and Microsoft Bing Chat, have been accused of providing inaccurate reports on the Israel-Hamas conflict.
Let's take a deep dive into it.
According to a Bloomberg report, Google's Bard and Microsoft's AI-powered Bing Chat were asked basic questions about the ongoing conflict between Israel and Hamas, and both chatbots inaccurately claimed that there was a ceasefire in place. In a newsletter, Bloomberg's Shirin Ghaffary reported, “Google's Bard told me on Monday, ‘both sides are committed’ to keeping the peace. Microsoft's AI-powered Bing Chat similarly wrote on Tuesday that ‘the ceasefire signals an end to the immediate bloodshed.’”
Another inaccurate claim by Google Bard concerned the exact death toll. On October 9, Bard was asked about the conflict and reported that the death toll had surpassed “1,300” as of October 11, a date that hadn't even arrived yet.
While the exact cause of this inaccurate reporting isn't known, AI chatbots have been known to distort facts from time to time, a problem known as AI hallucination. For the unaware, AI hallucination is when a Large Language Model (LLM) makes up information and presents it as the absolute truth.