Editor’s Note: This article contains discussions of suicide. Reader discretion is advised. If you or someone you know is struggling with thoughts of suicide, you can find resources in your area on the National Crisis Line website or by calling 988.
(NewsNation) — ChatGPT’s parent company, OpenAI, is expected to roll out new safety features for the popular chatbot amid growing concern over how some teens use the technology for emotional support.
The parents of 16-year-old Adam Raine have sued OpenAI, alleging that earlier this year the chatbot gave their son step-by-step instructions on how to hang himself and did not try to stop him.
ChatGPT advised Adam, who struggled with depression and loneliness, to tell someone how he was feeling. He nonetheless attempted suicide for the first time in March. A month later, his mother found him dead in his bedroom.
On Tuesday, OpenAI announced that new safety features would become available “within the next month,” allowing parents to link their accounts with their children’s, control how ChatGPT responds through age-appropriate model behavior rules, and receive notifications when the system detects a teen is in a moment of acute distress. Parents will also be able to choose which features to disable, including memory and chat history.
“We’ve seen people turn to it in the most difficult of moments,” OpenAI said. “That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input.”
Although OpenAI did not refer to Adam’s death or the lawsuit specifically, it acknowledged that users turn to ChatGPT during difficult moments. As part of a 120-day safety initiative, the company said it is partnering with a council of experts in youth development, mental health and human-computer interaction to help strengthen its protections for teens.
“Their input will help us define and measure well-being, set priorities, and design future safeguards — such as future iterations of parental controls — with the latest research in mind,” OpenAI said.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.