Google’s AI Overviews are “hallucinating” false information and drawing clicks away from accurate sources, experts warned The Times of London late last week.
Google introduced AI Overviews, a feature that aims to provide quick answers to search queries, in May 2024. The summaries are generated by Google's Gemini AI, a large language model similar to ChatGPT, which scans the search results to produce a short write-up and includes links to some of the sources.
Google Vice President of Search Elizabeth Reid said in a blog post that the overviews were designed to be a "jumping off point" that provided higher-quality clicks to webpages. "People are more likely to stay on [those pages], because we've done a better job of finding the right info and helpful webpages for them."
However, experts told The Times of London that these answers can be “confidently wrong” and direct searchers away from legitimate information.
When generative AI imagines facts or otherwise makes mistakes, computer scientists refer to it as hallucinating. These hallucinations can include references to non-existent scientific papers, like those NOTUS found were cited in Health and Human Services Secretary Robert F. Kennedy Jr.’s “Make America Healthy Again” report, and a host of other errors in judgment.
Shortly after AI Overviews launched last year, users began to point out how frequently the summaries included inaccurate information, The Times of London reports. One of the most notorious hallucinations was the suggestion that a user add non-toxic glue to pizza sauce to help the cheese stick better.
Google pushed back, claiming that many of the examples circulating were fake, but Reid acknowledged in her blog post that “some odd, inaccurate or unhelpful AI Overviews certainly did show up. And while these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve.”
According to the experts who spoke to The Times of London, despite technological advances, hallucinations are getting worse rather than better. Newer reasoning systems are producing more incorrect responses than their predecessors, and their designers aren't sure why.