Advanced AI News
OpenAI

How OpenAI is reworking ChatGPT after landmark wrongful death lawsuit

By Advanced AI Editor | August 30, 2025 | 4 min read


(Image: Yifei Fang/Moment via Getty Images)


ZDNET’s key takeaways

  • OpenAI is giving ChatGPT new safeguards.
  • A teen recently used ChatGPT to learn how to take his life.
  • OpenAI may add further parental controls for young users.

ChatGPT doesn’t have a good track record of intervening when a user is in emotional distress, but several updates from OpenAI aim to change that. 

The company is improving how its chatbot responds to distressed users by strengthening safeguards, updating what content is blocked and how, expanding interventions, localizing emergency resources, and bringing a parent into the conversation when needed, it announced this week. In the future, a guardian might even be able to see how their kid is using the chatbot.


People go to ChatGPT for everything, including advice, but the chatbot might not be equipped to handle the more sensitive queries some users are asking. OpenAI CEO Sam Altman himself said he wouldn’t trust AI for therapy, citing privacy concerns. A recent Stanford study detailed how chatbots lack the critical training human therapists have to identify when a person is a danger to themselves or others.

Teen suicides connected to chatbots

Those shortcomings can result in heartbreaking consequences. In April, a teen boy who had spent hours discussing his own suicide and methods with ChatGPT eventually took his own life. His parents have filed a lawsuit against OpenAI that says ChatGPT “neither terminated the session nor initiated any emergency protocol” despite demonstrating awareness of the teen’s suicidal state. In a similar case, AI chatbot platform Character.ai is also being sued by a mother whose teen son committed suicide after engaging with a bot that allegedly encouraged him. 

ChatGPT has safeguards, but they tend to work better in shorter exchanges. “As the back-and-forth grows, parts of the model’s safety training may degrade,” OpenAI writes in the announcement. Initially, the chatbot might direct a user to a suicide hotline, but over time, as the conversation wanders, the bot might offer up an answer that flouts safeguards. 


“This is exactly the kind of breakdown we are working to prevent,” OpenAI writes, adding that its “top priority is making sure ChatGPT doesn’t make a hard moment worse.”

Increased safeguards for users 

One way to do so is to strengthen safeguards across the board to prevent the chatbot from instigating or encouraging harmful behavior as the conversation continues. Another is to ensure that inappropriate content is thoroughly blocked — an issue the company has confronted with its chatbot in the past.

“We’re tuning those [blocking] thresholds so protections trigger when they should,” the company writes. OpenAI is also working on a de-escalation update to ground users in reality and to extend its protections to other mental health conditions beyond self-harm, as well as other forms of distress.


The company is making it easier for the bot to contact emergency services or expert help when users express intent to harm themselves. It has implemented one-click access to emergency services and is exploring connecting users to certified therapists. OpenAI said it is “exploring ways to make it easier for people to reach out to those closest to them,” which could include letting users designate emergency contacts and setting up a dialogue to make conversations with loved ones easier. 

“We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT,” OpenAI added. 

OpenAI’s recently released GPT-5 model improves on several benchmarks, reducing emotional reliance, sycophancy, and poor model responses to mental health emergencies by more than 25%, the company reported.

“GPT‑5 also builds on a new safety training method called safe completions, which teaches the model to be as helpful as possible while staying within safety limits. That may mean giving a partial or high-level answer instead of details that could be unsafe,” it said.


