
OpenAI Takes Calculated Move To Navigate Treacherous Waters Of ChatGPT AI Giving Out Mental Health Advice

By Advanced AI Editor, August 8, 2025


[Image, Getty: Newly announced changes to ChatGPT are intended to make the AI better at mental health guidance, but there are lots of potential gotchas ahead.]

In today’s column, I examine the newly announced changes to ChatGPT by OpenAI that are being undertaken to navigate the treacherous waters of AI producing mental health advice.

I say that this is a treacherous predicament because generative AI and large language models (LLMs) can potentially produce foul psychological advice and even generate therapeutically harmful guidance. The difficulty for AI makers is that, on the one hand, they relish that users are flocking to AI for mental health insights, but at the same time, the AI makers desperately want to avoid legal liability and reputational ruin if their AI spurs people toward mental ruin rather than mental well-being.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.

OpenAI Latest Announcement

An announcement on August 4, 2025, by OpenAI titled "What we're optimizing ChatGPT for" provides intriguing clues about the business and societal struggles of allowing generative AI and LLMs to provide mental health advice. I will, in a moment, walk you through the key points and explain the backstory of what is really going on.

Here is some important context.

To begin with, it’s important to recognize that one of the most popular ways people are using generative AI today is for addressing mental health concerns, see my coverage at the link here. This trend is understandable. Generative AI is widely accessible, typically low-cost or free, and available around the clock. Anyone feeling mentally unsettled or concerned can simply log in and start a conversation, any time of day or night. This is in sharp contrast to the effort and expense involved in seeing a human therapist.

Next, generative AI is willing to engage on mental health topics endlessly.

There are essentially no time limits, no hesitation, no steep hourly rates ticking away. In fact, these systems are often designed to be highly affirming and agreeable, sometimes to the point of excessive flattery. I’ve pointed out before that this kind of overly supportive interaction can sidestep the harder truths and constructive confrontation that are sometimes essential in genuine mental health counseling (see my analysis at the link here).

Finally, the companies behind these AI systems are in a tricky spot. By allowing their models to respond to mental health issues, they’re potentially exposing themselves to serious legal risks and reputational damage. This is especially the case if the AI dispenses unhelpful or harmful advice. Thus far, they’ve managed to avoid any major public fallout, but the danger is constantly looming as these LLMs continue to be used in quasi-therapeutic roles.

AI Makers Making Do

You might wonder why the AI makers don't just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor drawing people to generative AI. Shutting it off would be akin to slaughtering the cash cow, or capping an oil well that is gushing liquid gold.

An imprudent strategy.

The next best thing to do is to attempt to minimize the risks and hope that the gusher can keep flowing.

One step that the AI makers have already taken is to emphasize in their online licensing agreements that users aren't supposed to use the AI for mental health advice, see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI in this supposedly off-limits capacity.

Some would insist this is a wink-wink attempt to play both sides of the game at the same time, see my discussion at the link here.

In any case, AI makers are cognizant that since they are allowing their AI to be used for therapy, they ought to try to keep the AI somewhat in check. This might minimize their risks, or at least serve as later evidence that they made a yeoman's effort to do the right thing. Meanwhile, they can hold their heads high for taking overt steps to seemingly reduce the potential for harm and improve the chances of being beneficial.

ChatGPT Latest Changes

OpenAI's official blog post "What we're optimizing ChatGPT for," published on August 4, 2025, made these notable points (excerpts):

“We build ChatGPT to help you thrive in all the ways you want.”
“We don’t always get it right. Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment.”
“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency. While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
“When you ask something like ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer. It should help you think it through — asking questions, weighing pros and cons. New behavior for high-stakes personal decisions is rolling out soon.”
“We’re convening an advisory group of experts in mental health, youth development, and HCI. This group will help ensure our approach reflects the latest research and best practices.”

Let’s go ahead and unpack some of these points.

Detecting Mental Health Signs

A frequent criticism of using AI to perform mental health therapy is that the AI might computationally overlook serious signs exhibited by a user, perhaps involving delusions, dependencies, and other crucial cognitive conditions.

For example, a user might say that they are interested in learning about depression, and at the same time be asking about ways that people have tried to hurt themselves. A human therapist would almost certainly pick up on the dark undertones and want to explore the potential for self-harm. The AI might not make that kind of human behavioral connection and could blab on endlessly about the topics without catching the drift of the user's intentions.
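
To make that gap concrete, here is a deliberately toy sketch of why a screening check must look across the whole conversation rather than at each message in isolation. The keyword lists and function names are invented purely for illustration; real systems rely on trained classifiers, not keyword matching:

```python
# Toy illustration only: invented keyword lists, not a clinical instrument.
# Real distress detection uses trained classifiers over the full dialogue.
from dataclasses import dataclass

TOPIC_SIGNALS = {"depression", "hopeless", "worthless"}        # assumed lexicon
RISK_SIGNALS = {"hurt themselves", "end it all", "self-harm"}  # assumed lexicon

@dataclass
class ScreenResult:
    topic_hits: int
    risk_hits: int
    flagged: bool

def screen_conversation(turns: list[str]) -> ScreenResult:
    """Flag only when topic and risk signals co-occur across the dialogue."""
    text = " ".join(turns).lower()
    topic_hits = sum(1 for s in TOPIC_SIGNALS if s in text)
    risk_hits = sum(1 for s in RISK_SIGNALS if s in text)
    # Checking each message alone would miss a user who mentions depression
    # in one turn and methods of self-harm several turns later; pooling the
    # turns lets two weak signals combine into one strong one.
    return ScreenResult(topic_hits, risk_hits, topic_hits > 0 and risk_hits > 0)

turns = [
    "I'm interested in learning about depression.",
    "What are ways people have tried to hurt themselves?",
]
print(screen_conversation(turns))  # ScreenResult(topic_hits=1, risk_hits=1, flagged=True)
```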

Even if the AI does manage to discern a potential issue, the question arises as to what the AI should do about it.

Some would argue that the AI should immediately report the person to mental health authorities. The problem there is that a ton of false reports are bound to be made. It could be a tsunami of misreports. In addition, people are going to be steamed that the AI snitched on them, especially if they were innocently chatting and their efforts had no bearing on anything of a dire nature.
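
The base-rate arithmetic behind that worry is worth a quick sketch. With numbers I am assuming purely for the sake of argument (1% of users genuinely at risk, a detector with 90% sensitivity and 95% specificity), the vast majority of reports would still be false:

```python
# All figures are assumed for illustration; none come from OpenAI.
prevalence = 0.01    # assumed share of users genuinely at risk
sensitivity = 0.90   # assumed P(flagged | at risk)
specificity = 0.95   # assumed P(not flagged | not at risk)

true_flags = sensitivity * prevalence                # 0.009
false_flags = (1 - specificity) * (1 - prevalence)   # 0.0495
precision = true_flags / (true_flags + false_flags)

print(f"Flags that are correct: {precision:.1%}")      # ~15.4%
print(f"Flags that are false:   {1 - precision:.1%}")  # ~84.6%
```

Even a fairly accurate detector, applied to hundreds of millions of users, would generate false reports at a scale that dwarfs the true ones.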

Another angle is that perhaps the AI should engage the user in a dialogue about their possible mental health condition. The problem there is that the AI is essentially rendering a therapeutic diagnosis. Again, this could be erroneous. In addition, people who aren’t in that plight would undoubtedly be upset that the AI has turned in that direction. A strident case could be made that the AI might be causing greater mental disturbance than being avidly helpful.

Here’s where things are.

Research is underway to improve how AI makes these kinds of tough calls and does so in a sensible, evidence-based fashion, see my discussion at the link here. Step by step, it is anticipated that advances in AI for mental health will gradually diminish the false positives and false negatives, turning the AI into a reliable and steady diagnostic tool.

Asking Questions Vs. Giving Answers

The way that generative AI and LLMs have been shaped by AI makers is to generally give a user an answer, handing the answer to them on a silver platter. The belief is that users do not want to slog through a series of back-and-forth entreaties. They want the bottom line.

If a user asks a question, quickly and without hesitation, plop out an answer.

An issue with this approach is that the AI might not have sufficient info from the user to produce an adequate answer. There is a tradeoff involved. An AI that asks a lot of follow-up questions to a given question is likely to be irritating to users. The users will opt to use some other AI. They expect a fast-food delivery of answers. Period, end of story.

In the case of mental health circumstances, it is rare that a one-and-done approach will be suitable. Think about how therapists work. They use what is commonly known as talk therapy. They talk with a client or patient to get the needed details. The aim is not to just rush to judgment. But AI is programmed by AI makers to purposely rush to judgment in order to spit out an answer to any given question.
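
As a rough sketch of how a developer might approximate that talk-therapy pacing on top of an LLM, consider routing high-stakes prompts to a question-asking system prompt. The trigger list, prompt wording, and model choice below are my own assumptions for illustration, not OpenAI's actual mechanism (this uses the standard OpenAI Python SDK):

```python
# Sketch only: the heuristic trigger list and system prompts are invented
# for illustration; OpenAI's real routing logic is not public.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

HIGH_STAKES_HINTS = ("break up", "quit my job", "stop my medication")  # toy list

def respond(user_message: str) -> str:
    if any(hint in user_message.lower() for hint in HIGH_STAKES_HINTS):
        system = (
            "This is a high-stakes personal decision. Do not give a direct "
            "answer. Ask one clarifying question at a time, help the user "
            "weigh pros and cons, and let them reach their own conclusion."
        )
    else:
        system = "Answer the question directly and concisely."
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content

print(respond("Should I break up with my boyfriend?"))  # expect questions, not a verdict
```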

For more on the similarities and differences between AI-based therapy and the work of human therapists, see my analysis at the link here.

We are gradually witnessing a bit of a shift in the philosophy and business practice of having AI provide instantaneous knee-jerk responses. OpenAI recently released its ChatGPT Study Mode, a feature that allows learners and students to be led incrementally by AI toward arriving at answers. A complaint about AI used by students is that the students don’t do any of the heavy lifting. The idea is that users can activate AI into a teaching mode and, ergo, no longer be trapped in the instant oatmeal conundrum. For the details of how the learner-oriented ChatGPT Study Mode works, see my explanation at the link here.

The same overarching principles of collaborative interactivity can readily be applied to mental health guidance. When a user says they have this or that mental concern, the AI doesn’t necessarily have to leap to an abrupt conclusion. Instead, the AI can shift into a more interactive mode of trying to adroitly ferret out more details and remain engaging and informative.

This is referred to in the AI field as enabling new behaviors for AI that are particularly fruitful in high-stakes situations, such as when someone is faced with a mental health concern or other paramount personal decision. You can expect to see the major AI makers increasingly adopting these kinds of add-on or supplemental innovative approaches into their LLMs.

Mental Health Booster Or Buster

An ongoing debate concerns whether the use of AI for mental health advisement on a population-level basis is going to produce a positive or a negative outcome.

We are immersed in a grand experiment right now. Millions and possibly billions of people across the globe are turning to everyday AI to get their mental health concerns aired, possibly seeking cures accordingly. Everyone is a guinea pig in an unguided, wanton, and highly consequential experiment.

What does this bode for the near-term mental health of the population? And what about the longer-term impacts on the human mind and societal interactions? See my analysis of the potential widespread impacts at the link here.

If AI can do a proper job on this heady task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to AI is generally plentiful in comparison. It could be that AI for mental health will greatly benefit the mental status of humankind. A dour counterargument is that AI might be the worst destroyer of mental health in the history of humanity. Boom, drop the mic.

Nobody knows for sure, and only time will tell.

As per the famous words of Marcus Aurelius: “The happiness of your life depends upon the quality of your thoughts.” Whether AI and our shaping of AI will take us in the proper direction is the zillion-dollar question of the hour.


