
OpenAI Admits ChatGPT Missed Signs of Delusions in Users Struggling With Mental Health

By Advanced AI Editor · August 5, 2025 · 3 min read


After more than a month of issuing the same copy-pasted response amid mounting reports of “AI psychosis,” OpenAI has finally admitted that ChatGPT has been failing to recognize clear signs of users struggling with their mental health, including suffering from delusions.

“We don’t always get it right,” the AI maker wrote in a new blog post, under a section titled “On healthy use.”

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” it added. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

Though it has previously acknowledged the issue, OpenAI has been noticeably reticent amid widespread reporting about its chatbot’s sycophantic behavior leading users to suffer breaks with reality or experience manic episodes.

What little it has shared mostly comes from a single statement that it has repeatedly sent to news outlets, regardless of the specifics — be it a man who died by “suicide by cop” after falling in love with a ChatGPT persona, or others who were involuntarily hospitalized or jailed after becoming entranced by the AI.

“We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher,” the statement reads. “We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”

In response to our previous reporting, OpenAI also shared that it had hired a full-time clinical psychiatrist to help research the mental health effects of its chatbot.

It’s now taking those measures a step further. In this latest update, OpenAI said it’s convening an advisory group of mental health and youth development experts to improve how ChatGPT responds during “critical moments.”

In terms of actual updates to the chatbot, progress, it seems, is incremental. OpenAI said it added a new safety feature in which users will now receive “gentle reminders” encouraging them to take breaks during lengthy conversations — a perfunctory, bare minimum intervention that seems bound to become the industry equivalent of a “gamble responsibly” footnote in betting ads. 

It also teased that “new behavior for high-stakes personal decisions” will be coming soon, conceding that the bot shouldn’t give a straight answer to questions like “Should I break up with my boyfriend?”

The blog concludes with an eyebrow-raising declaration.

“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured?” the blog reads. “Getting to an unequivocal ‘yes’ is our work.”

The choice of words speaks volumes: it sounds like, by the company’s own admission, it’s still getting there.

More on OpenAI: It Doesn’t Take Much Conversation for ChatGPT to Suck Users Into Bizarre Conspiratorial Rabbit Holes


