At its Google I/O event, the company announced a new search feature called “AI Overviews.” Since launch, the feature has drawn widespread attention for its odd and misleading responses. As doubts grew over Google's accuracy and its AI efforts, the company finally responded to the concern, explaining to users how the feature works and why it generated such questionable responses. Check out what Google said regarding the AI Overviews mishaps.
In the blog post, Google first highlighted how AI Overviews is gaining traction and how people appreciate its usefulness and capabilities. Google said, “people have higher satisfaction with their search results, and they're asking longer, more complex questions that they know Google can now help with.” The post then turned to the odd overviews and why it was important to address and clarify how such misleading results came about. Before getting into the topic, Google briefly explained how AI Overviews works.
Also read: Google AI Overview generates misleading responses, tells users to eat rocks, glue
Google clarified that AI Overviews does not generate results from training data alone; it is powered by a customised language model integrated with Google's core web ranking systems. This means the feature draws its information from relevant, high-quality search results. Google said, “AI Overviews generally don't ‘hallucinate’ or make things up in the ways that other LLM products might.”
At the same time, Google acknowledged that AI Overviews produced odd responses due to “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”
Also read: Google Search AI overview: How to bypass them and directly see results
In the blog, Google also addressed the responses that went viral on social media, such as “How many rocks should I eat” and “using glue to get cheese to stick to pizza.” The company said that the feature
Read more on tech.hindustantimes.com