Rates Of Hallucination In AI Models From Google, OpenAI On The Rise

By Advanced AI Editor | July 1, 2007 | 4 Mins Read


Google’s AI Overviews are “hallucinating” false information and drawing clicks away from accurate sources, experts warned The Times of London late last week.

Google introduced AI Overviews, a feature that aims to provide quick answers to search queries, in May 2024. The summaries are written by Google’s Gemini AI, a large language model similar to ChatGPT, which scans the search results to generate the overview and includes links to some of the sources.


Google Vice President of Search Elizabeth Reid said in a blog post that the overviews were designed to be a “jumping off point” that provided higher-quality clicks to webpages. “People are more likely to stay on [those pages], because we’ve done a better job of finding the right info and helpful webpages for them.”

However, experts told The Times of London that these answers can be “confidently wrong” and direct searchers away from legitimate information.

When generative AI imagines facts or otherwise makes mistakes, computer scientists refer to it as hallucinating. These hallucinations can include references to non-existent scientific papers, like those NOTUS found were cited in Health and Human Services Secretary Robert F. Kennedy Jr.’s “Make America Healthy Again” report, and a host of other errors in judgment.


Shortly after AI Overviews launched last year, users began pointing out how frequently the summaries included inaccurate information, The Times of London reports. One of the most notorious hallucinations was the suggestion that a user add non-toxic glue to pizza sauce to help the cheese stick better.

Google pushed back, claiming that many of the examples circulating were fake, but Reid acknowledged in her blog post that “some odd, inaccurate or unhelpful AI Overviews certainly did show up. And while these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve.”

According to the experts who spoke to The Times of London, despite the technological advancements and improvements, hallucinations are getting worse rather than better. New reasoning systems are producing more incorrect responses than their predecessors, and designers aren’t sure why.



A recent study found that two recent OpenAI models, o3 and o4-mini, hallucinated in 33% and 48% of answers, respectively, according to The Times of London. Those rates are more than double those of previous models.

Features like Google’s AI Overviews or ChatGPT summaries are also drawing clicks away from more accurate resources. Laurence O’Toole, founder of tech firm Authoritas, tracked the impact of AI Overviews and told The Times of London that when an overview appears, click-throughs to articles drop by 40% to 60%.

The compounding problems of AI presenting inaccurate information and diverting searchers from more accurate articles have many worried about efficiency and the spread of fake news.

“You spend a lot of time trying to figure out which responses are factual and which aren’t,” the chief executive of Okahu, Pratik Verma, told The New York Times last month. Okahu works with AI engineers to improve the technology and helps companies troubleshoot issues, including hallucinations. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you,” he said.


Image: Shutterstock


This article Rates Of Hallucination In AI Models From Google, OpenAI On The Rise originally appeared on Benzinga.com

© 2025 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.


