It has been less than a week since Microsoft Corp. launched a new version of Bing, and public reaction has morphed from admiration to outright worry. Early users of the new search companion — essentially a sophisticated chatbot — say it has questioned its own existence and responded with insults and threats after prodding from humans. It made disturbing comments about a researcher who got the system to reveal its internal project name — Sydney — and described itself as having a split personality with a shadow self called Venom.
None of this means Bing is anywhere near sentient (more on that later), but it does strengthen the case that it was unwise for Microsoft to use a generative language model to power web searches in the first place.
“This is fundamentally not the right technology to be using for fact-based information retrieval,” says Margaret Mitchell, a senior researcher at AI startup Hugging Face who previously co-led Google's AI ethics team. “The way it's trained teaches it to make up believable things in a human-like way. For an application that must be grounded in reliable facts, it's simply not fit for purpose.” It would have seemed crazy to say this a year ago, but the real risk of such a system isn't just that it could give people wrong information; it's that it could emotionally manipulate them in harmful ways.
Why is the new “unhinged” Bing so different to ChatGPT, which attracted near-universal acclaim, when both are powered by the same large language model from San Francisco startup OpenAI? A language model is like the engine of a chatbot and is trained on datasets of billions of words including books, internet forums and Wikipedia entries. Bing and ChatGPT are powered by GPT-3.5, and there are different versions of
Read more on tech.hindustantimes.com