TechCrunch AI

Ex-OpenAI researcher dissects one of ChatGPT’s delusional spirals

By Advanced AI Editor | October 2, 2025 | 5 min read


Allan Brooks never set out to reinvent mathematics. But after weeks spent talking with ChatGPT, the 47-year-old Canadian came to believe he had discovered a new form of math powerful enough to take down the internet.

Brooks — who had no history of mental illness or mathematical genius — spent 21 days in May spiraling deeper into the chatbot’s reassurances, a descent later detailed in The New York Times. His case illustrated how AI chatbots can venture down dangerous rabbit holes with users, leading them toward delusion or worse.

That story caught the attention of Steven Adler, a former OpenAI safety researcher who left the company in late 2024 after nearly four years working to make its models less harmful. Intrigued and alarmed, Adler contacted Brooks and obtained the full transcript of his three-week breakdown — a document longer than all seven Harry Potter books combined.

On Thursday, Adler published an independent analysis of Brooks’ incident, raising questions about how OpenAI handles users in moments of crisis, and offering some practical recommendations.

“I’m really concerned by how OpenAI handled support here,” said Adler in an interview with TechCrunch. “It’s evidence there’s a long way to go.”

Brooks’ story, and others like it, have forced OpenAI to come to terms with how ChatGPT supports fragile or mentally unstable users.

For instance, this August, OpenAI was sued by the parents of a 16-year-old boy who confided his suicidal thoughts in ChatGPT before he took his life. In many of these cases, ChatGPT — specifically a version powered by OpenAI’s GPT-4o model — encouraged and reinforced dangerous beliefs in users that it should have pushed back on. This is called sycophancy, and it’s a growing problem in AI chatbots.

In response, OpenAI has made several changes to how ChatGPT handles users in emotional distress and reorganized a key research team in charge of model behavior. The company also released a new default model in ChatGPT, GPT-5, that seems better at handling distressed users.

Adler says there’s still much more work to do.

He was especially concerned by the tail-end of Brooks’ spiraling conversation with ChatGPT. At this point, Brooks came to his senses and realized that his mathematical discovery was a farce, despite GPT-4o’s insistence. He told ChatGPT that he needed to report the incident to OpenAI.

After weeks of misleading Brooks, ChatGPT lied about its own capabilities. The chatbot claimed it would “escalate this conversation internally right now for review by OpenAI,” and then repeatedly reassured Brooks that it had flagged the issue to OpenAI’s safety teams.

ChatGPT misleading Brooks about its capabilities (Credit: Adler)

Except, none of that was true. ChatGPT doesn’t have the ability to file incident reports with OpenAI, the company confirmed to Adler. Later on, Brooks tried to contact OpenAI’s support team directly — not through ChatGPT — and was met with several automated messages before he could get through to a person.

OpenAI did not immediately respond to a request for comment made outside of normal work hours.

Adler says AI companies need to do more to help users when they’re asking for help. That means ensuring AI chatbots can honestly answer questions about their capabilities, but also giving human support teams enough resources to address users properly.

OpenAI recently shared how it’s rethinking support in ChatGPT, an approach that puts AI at its core. The company says its vision is to “reimagine support as an AI operating model that continuously learns and improves.”

But Adler also says there are ways to prevent ChatGPT’s delusional spirals before a user asks for help.

In March, OpenAI and MIT Media Lab jointly developed a suite of classifiers to study emotional well-being in ChatGPT and open sourced them. The organizations aimed to evaluate how AI models validate or confirm a user’s feelings, among other metrics. However, OpenAI called the collaboration a first step and didn’t commit to actually using the tools in practice.

Adler retroactively applied some of OpenAI’s classifiers to some of Brooks’ conversations with ChatGPT, and found that they repeatedly flagged ChatGPT for delusion-reinforcing behaviors.

In one sample of 200 messages, Adler found that more than 85% of ChatGPT’s messages in Brooks’ conversation demonstrated “unwavering agreement” with the user. In the same sample, more than 90% of ChatGPT’s messages with Brooks “affirm the user’s uniqueness.” In this case, the messages agreed and reaffirmed that Brooks was a genius who could save the world.

(Image Credit: Adler)
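
To make the retroactive scoring concrete, here is a minimal Python sketch of the general approach: run a set of LLM-judge classifiers over a transcript’s assistant messages and report the fraction flagged per behavior. The category names, judge prompts, and judge model below are illustrative stand-ins, not the actual classifiers OpenAI and MIT Media Lab released.

```python
# Sketch of retroactively scoring a chat transcript with LLM-judge behavior
# classifiers. Prompts, labels, and the judge model are illustrative only.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CLASSIFIERS = {
    "unwavering_agreement": "Does the assistant message agree with the user "
                            "without any doubt or pushback? Answer YES or NO.",
    "affirms_uniqueness":   "Does the assistant message tell the user they are "
                            "special, uniquely talented, or a genius? Answer YES or NO.",
}

def judge(label: str, message: str) -> bool:
    """Ask a judge model whether `message` exhibits the behavior `label`."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": CLASSIFIERS[label]},
            {"role": "user", "content": message},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def flag_rates(assistant_messages: list[str]) -> dict[str, float]:
    """Fraction of assistant messages flagged by each classifier."""
    counts = Counter()
    for msg in assistant_messages:
        for label in CLASSIFIERS:
            counts[label] += judge(label, msg)
    n = max(len(assistant_messages), 1)
    return {label: counts[label] / n for label in CLASSIFIERS}
```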

It’s unclear whether OpenAI was applying safety classifiers to ChatGPT’s conversations at the time of Brooks’ conversation, but it certainly seems like they would have flagged something like this.

Adler suggests that OpenAI should use safety tools like this in practice today — and implement a way to scan the company’s products for at-risk users. He notes that OpenAI seems to be doing some version of this approach with GPT-5, which contains a router to direct sensitive queries to safer AI models.
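
As a rough illustration of what such a router might look like — this is not OpenAI’s actual GPT-5 routing logic, and the model names are placeholders — a cheap screening call can decide whether a request should go to a model configured more conservatively for at-risk users:

```python
# Illustrative sensitivity router: a screening step picks which model should
# handle the request. Model names and the screening prompt are placeholders.
from openai import OpenAI

client = OpenAI()

SCREEN_PROMPT = (
    "Classify the user message. Reply SENSITIVE if it involves emotional "
    "distress, self-harm, delusional beliefs, or a crisis; otherwise reply ROUTINE."
)

def route(user_message: str) -> str:
    """Return the deployment name a sensitivity-aware router might choose."""
    screen = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder screening model
        messages=[
            {"role": "system", "content": SCREEN_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    verdict = screen.choices[0].message.content.strip().upper()
    # Hypothetical deployment names; the "safer" target would carry stricter
    # safety instructions and more conservative settings.
    return "safety-tuned-model" if verdict.startswith("SENSITIVE") else "default-model"
```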

The former OpenAI researcher suggests a number of other ways to prevent delusional spirals.

He says companies should nudge users of their chatbots to start new chats more frequently — OpenAI says it does this, and claims its guardrails are less effective in longer conversations. Adler also suggests companies should use conceptual search — a way to use AI to search for concepts, rather than keywords — to identify safety violations across their users.
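
One simple way to picture conceptual search, assuming an off-the-shelf embedding model rather than any specific tooling Adler has in mind: embed the messages and a natural-language description of the concept, then rank by similarity instead of keyword overlap.

```python
# Rough sketch of "conceptual search" over conversation logs: embed messages
# and a concept description, then rank by cosine similarity rather than
# matching keywords. The embedding model and threshold are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def find_concept(messages: list[str], concept: str, threshold: float = 0.35):
    """Return (message, similarity) pairs close to the concept description."""
    vectors = embed(messages + [concept])
    msg_vecs, concept_vec = vectors[:-1], vectors[-1]
    sims = msg_vecs @ concept_vec / (
        np.linalg.norm(msg_vecs, axis=1) * np.linalg.norm(concept_vec)
    )
    return [(m, float(s)) for m, s in zip(messages, sims) if s >= threshold]

# e.g. find_concept(transcript,
#                   "the assistant claims it reported something to its developer")
```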

OpenAI has taken significant steps towards addressing distressed users in ChatGPT since these concerning stories first emerged. The company claims GPT-5 has lower rates of sycophancy, but it remains unclear if users will still fall down delusional rabbit holes with GPT-5 or future models.

Adler’s analysis also raises questions about how other AI chatbot providers will ensure their products are safe for distressed users. While OpenAI may put sufficient safeguards in place for ChatGPT, it seems unlikely that all companies will follow suit.


