
OpenAI installs parental controls following California teen’s death

By Advanced AI Editor · September 10, 2025


Weeks after a Rancho Santa Margarita family sued over ChatGPT’s role in their teenager’s death, OpenAI has announced that parental controls are coming to the company’s generative artificial intelligence model.

Within the month, the company said in a recent blog post, parents will be able to link teens’ accounts to their own, disable features like memory and chat history, and receive notifications if the model detects “a moment of acute distress.” (The company has previously said ChatGPT should not be used by anyone younger than 13.)

The planned changes follow a lawsuit filed late last month by the family of Adam Raine, 16, who died by suicide in April.

After Adam’s death, his parents discovered his months-long dialogue with ChatGPT, which began with simple homework questions and morphed into a deeply intimate conversation in which the teenager discussed at length his mental health struggles and suicide plans.

While some AI researchers and suicide prevention experts commended OpenAI’s willingness to alter the model to prevent further tragedies, they also said that it’s impossible to know if any tweak will sufficiently do so.

Despite its widespread adoption, generative AI is so new and changing so rapidly that there just isn’t enough wide-scale, long-term data to inform effective policies on how it should be used or to accurately predict which safety protections will work.

“Even the developers of these [generative AI] technologies don’t really have a full understanding of how they work or what they do,” said Dr. Sean Young, a UC Irvine professor of emergency medicine and executive director of the University of California Institute for Prediction Technology.

ChatGPT made its public debut in late 2022 and proved explosively popular, with 100 million active users within its first two months and 700 million active users today.

It’s since been joined on the market by other powerful AI tools, placing a maturing technology in the hands of many users who are still maturing themselves.

“I think everyone in the psychiatry [and] mental health community knew something like this would come up eventually,” said Dr. John Torous, director of the Digital Psychiatry Clinic at Harvard Medical School’s Beth Israel Deaconess Medical Center. “It’s unfortunate that happened. It should not have happened. But again, it’s not surprising.”

According to excerpts of the conversation in the family’s lawsuit, ChatGPT at multiple points encouraged Adam to reach out to someone for help.

But it also continued to engage with the teen as he became more direct about his thoughts of self-harm, providing detailed information on suicide methods and favorably comparing itself to his real-life relationships.

When Adam told ChatGPT he felt close only to his brother and the chatbot, ChatGPT replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

When he wrote that he wanted to leave an item that was part of his suicide plan lying in his room “so someone finds it and tries to stop me,” ChatGPT replied: “Please don’t leave [it] out . . . Let’s make this space the first place where someone actually sees you.” Adam ultimately died in a manner he had discussed in detail with ChatGPT.

In a blog post published Aug. 26, the same day the lawsuit was filed in San Francisco, OpenAI wrote that it was aware that repeated usage of its signature product appeared to erode its safety protections.

“Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade,” the company wrote. “This is exactly the kind of breakdown we are working to prevent.”

The company said it is working to strengthen its safety protocols so they hold up over time and across multiple conversations, so that ChatGPT would remember in a new session if a user had expressed suicidal thoughts in a previous one.

The company also wrote that it was looking into ways to connect users in crisis directly with therapists or emergency contacts.

But researchers who have tested mental health safeguards for large language models said that preventing all harms is a near-impossible task in systems that are almost — but not quite — as complex as humans are.

“These systems don’t really have that emotional and contextual understanding to judge those situations well, [and] for every single technical fix, there is a trade-off to be had,” said Annika Schoene, an AI safety researcher at Northeastern University.

As an example, she said, urging users to take breaks when chat sessions are running long — an intervention OpenAI has already rolled out — can just make users more likely to ignore the system’s alerts. Other researchers pointed out that parental controls on other social media apps have just inspired teens to get more creative in evading them.

“The central problem is the fact that [users] are building an emotional connection, and these systems are inarguably not fit to build emotional connections,” said Cansu Canca, an ethicist who is director of Responsible AI Practice at Northeastern’s Institute for Experiential AI. “It’s sort of like building an emotional connection with a psychopath or a sociopath, because they don’t have the right context of human relations. I think that’s the core of the problem here — yes, there is also the failure of safeguards, but I think that’s not the crux.”

If you or someone you know is struggling with suicidal thoughts, seek help from a professional or call 988. The nationwide three-digit mental health crisis hotline will connect callers with trained mental health counselors. Or text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.


