Google's AI Overview search feature has had, I think it's fair to say, quite a rocky start. From recommending users drink light-colored urine to suggesting they add glue to pizza sauce, we've all had a laugh at the AI's expense. In between bouts of existential dread and genuine concern over the serious damage this sort of advice could inflict on users looking for factual results, of course.
Now Google has responded, and it turns out the whole thing was actually a big success (via The Verge). Liz Reid, head of Google Search, says that "user feedback shows that…people have higher satisfaction with their search results", and that really, these erroneous results are simply down to the poor AI responding to "nonsensical queries and satirical content".
You should all be very ashamed of yourselves for asking silly questions, I guess. Anyway, Reid gives the example of a much-mocked search query response, "How many rocks should I eat", which the AI responded to by referencing a satirical article from The Onion, advising that you should eat at least one small rock a day.
This, Reid explains, is what is referred to as a "data void" or "information gap", where Google can only pull from a limited amount of high-quality content about a specific topic. Given that the only information the AI could reference was a satirical article that was also republished on a geological software provider's website, the AI Overview "faithfully linked to one of the only websites that tackled the question".
Since the launch, Google has apparently been busy working on updates that should prevent these sorts of results from appearing in future. These include better detection mechanisms that shouldn't show an AI Overview for "nonsensical queries", limits on the use of user-generated content in responses, and "additional triggering refinements to enhance our quality protections."
There's also an interesting line regarding news coverage: "We aim to not show AI Overviews