Generative AI is changing the way the world works. It has already affected a multitude of industries, and as the technology keeps improving, its scope of impact is likely to grow. In many ways, the inception of this new wave came in November 2022, when OpenAI publicly released ChatGPT. Yet even Sam Altman, the co-founder and CEO of the company behind the technology, is not convinced of its accuracy. Speaking at a session at the Indraprastha Institute of Information Technology, Delhi, Altman said jokingly, “I trust the answers that come out of ChatGPT the least on Earth”.
Those who work with generative AI are familiar with a critical issue known as AI hallucination. In essence, it refers to a confident response from an AI that is not justified by its training data, whether because that data is insufficient, biased, or inaccurate.
The issue is problematic because generative AI is increasingly used to create content, including news articles and analysis pieces, where hallucinations can have serious consequences. ChatGPT itself is not free of the problem, which is why Altman delivered that line in jest.
But he also addressed the issue seriously. When asked about the hallucination problem in ChatGPT and other GPT-based models, he said, “The problem is real. And we are working to improve it. It will take us about a year to perfect the model. It is a balance between creativity and accuracy and we are trying to minimize the latter”.
He also addressed the challenge of making AI safe. Explaining what OpenAI does to ensure the creation of safe and responsible AI, Altman said, “There is not a single solution to make AI safe. We improve the algorithm, conduct audits, work on