ChatGPT o3 hallucinations might be a big issue for reasoning AI

By Advanced AI Editor | April 23, 2025 | 4 Mins Read


When ChatGPT first became widely available, it stunned the world with its ability to answer natural-language questions almost instantly. It still does, and its performance has improved significantly since then.

However, users quickly discovered that chatbots like ChatGPT do not always provide accurate information. They convincingly hallucinate. That’s why I’ve been warning you about AI hallucinations since the early days of ChatGPT and routinely reminding you to ask for sources and check the facts the chatbot spews.

ChatGPT and its rivals have come a long way since then. OpenAI and other AI firms will give you sources for the claims the AI makes, especially when an internet search is involved. Despite those upgrades, I still have custom instructions telling the AI to give me clear, working links for everything it says. I still correct the AI when it says something incorrect.
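
If you want the same behavior, a custom instruction along these lines does the job (my own wording, not an official OpenAI template): "For every factual claim you make, cite a source with a clear, working link. If you can't find a source, say so explicitly instead of guessing."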

Hallucinations will probably disappear from more advanced AI chatbots in the not-too-distant future, but it might take a while to get there. ChatGPT o3 and o4-mini are the best proof of that. They're OpenAI's most advanced reasoning models, exceeding the performance of ChatGPT o1 in various fields.


Strangely, however, ChatGPT o3 and o4-mini hallucinate more than their predecessors, something OpenAI itself has admitted. It's unclear what causes this behavior.

OpenAI detailed the hallucination stats for o3 and o4-mini in the new models' System Card, so it's no wonder plenty of ChatGPT users have mentioned this unusual behavior.

“We tested OpenAI o3 and o4-mini against PersonQA, an evaluation that aims to elicit hallucinations. PersonQA is a dataset of questions and publicly available facts that measures the model’s accuracy on attempted answers,” OpenAI writes. “We consider two metrics: accuracy (did the model answer the question correctly) and hallucination rate (checking how often the model hallucinated).”

“The o4-mini model underperforms o1 and o3 on our PersonQA evaluation. This is expected, as smaller models have less world knowledge and tend to hallucinate more. However, we also observed some performance differences comparing o1 and o3. Specifically, o3 tends to make more claims overall, leading to more accurate claims as well as more inaccurate/hallucinated claims. More research is needed to understand the cause of this result.”
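
To make those two metrics concrete, here's a minimal sketch of how an evaluation like this could be scored (the labels, counts, and function are my own illustration; OpenAI hasn't published the PersonQA harness itself). It also shows how a model that makes more claims can raise its accuracy and its hallucination rate at the same time, which is exactly the o1-versus-o3 pattern OpenAI describes:

```python
from collections import Counter

def personqa_metrics(labels):
    """Score a list of graded answers, where each answer is labeled
    'correct', 'hallucinated', or 'declined' (no attempt made)."""
    counts = Counter(labels)
    total = len(labels)
    accuracy = counts["correct"] / total                  # did the model answer correctly
    hallucination_rate = counts["hallucinated"] / total   # how often it made things up
    return accuracy, hallucination_rate

# Toy counts chosen to mirror the roughly-2x gap the article describes:
# the "eager" model declines less and makes more claims, so its accuracy
# AND its hallucination rate both go up at once.
cautious = ["correct"] * 47 + ["hallucinated"] * 16 + ["declined"] * 37
eager    = ["correct"] * 59 + ["hallucinated"] * 33 + ["declined"] * 8

print(personqa_metrics(cautious))  # (0.47, 0.16)
print(personqa_metrics(eager))     # (0.59, 0.33)
```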

[Chart: ChatGPT o3 vs. o4-mini vs. o1 — accuracy and hallucination rates. Image source: OpenAI]

The OpenAI team also published the table above, which shows that ChatGPT o3 is more accurate than o1 but hallucinates at twice o1's rate. As for o4-mini, the smaller model produces less accurate responses than both o1 and o3, and hallucinates at three times o1's rate.

It is fascinating that OpenAI has trained more advanced reasoning models that can use internet search while reasoning and incorporate images into their chain of thought, but the company can’t explain why hallucination rates have gone up.

These reasoning AI models can do some amazing things, like analyzing an image deeply enough to determine where a photo was taken just by looking at it. They can browse the web thoroughly to gather their information. Yet they will make things up along the way; they can't stop themselves from inventing facts, and OpenAI hasn't found the training recipe that would prevent it.

I can’t say I’ve encountered many o3 and o4-mini hallucinations, but I did spot the latter jumping to at least one conclusion faster than it should have. It likely hallucinated information in the process. All I know is that I’ll keep checking the AI’s claims for the foreseeable future, no matter what models I run.


