Advanced AI News

OpenAI Adds Teen Safety Features to ChatGPT

By Advanced AI Editor | September 4, 2025 | 4 Mins Read


OpenAI will soon roll out new features in ChatGPT that give parents more control over their children's interactions with the chatbot, according to a blog post by the AI giant.

The Sam Altman-led company says that within the next month, parents will be able to:

  • Link their account with their teen's account (minimum age of 13) through an email invitation
  • Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which will be switched on by default
  • Manage which features to disable, including memory and chat history
  • Receive notifications when the system detects their teen is in a moment of "acute distress"

Notably, experts will guide the design of the "acute distress" notification feature to foster trust between parents and their teenage children.

"These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible," the OpenAI blog post reads.

The company's latest move comes in light of a wrongful death lawsuit filed in the US by the parents of a 16-year-old.

The parents claim that ChatGPT provided their son with detailed self-harm instructions, validated his suicidal thoughts, discouraged him from seeking help, and ultimately enabled his death by suicide in April 2025.

What Is OpenAI Doing To Help With Mental Health Issues?

In its latest blog post, OpenAI says it will collaborate with an “Expert Council on Well-Being” to measure well-being, set priorities, and design future safeguards with the “latest research in mind.”

“The council’s role is to shape a clear, evidence-based vision for how AI can support people’s well-being and help them thrive,” the blog post notes.

“While the council will advise on our product, research, and policy decisions, OpenAI remains accountable for the choices we make,” it adds.

Furthermore, OpenAI says it will work alongside a worldwide group of physicians to inform its safety research, AI model training, and other interventions.

“More than 90 physicians across 30 countries—including psychiatrists, pediatricians, and general practitioners—have already contributed to our research on how our models should behave in mental health contexts,” the post states.

“We are adding even more clinicians and researchers to our network, including those with deep expertise in areas like eating disorders, substance use, and adolescent health,” OpenAI adds.

OpenAI’s Admissions About ChatGPT’s Errors: A Brief History

This latest policy update comes after OpenAI admitted that safeguards built into its AI system might not work during longer conversations.


The company explained that while ChatGPT may correctly point to a suicide hotline in the early stages of a conversation, “after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

“Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade,” a blog post dated August 26 reads.

Notably, Altman himself addressed the mental health implications of using ChatGPT in an X (formerly Twitter) post last month.

If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…

— Sam Altman (@sama) August 11, 2025

He emphasized that OpenAI does not want AI models like ChatGPT to reinforce delusion, self-destructive behavior, or a mentally fragile state.

“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy,” Altman wrote.

“But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive,” he added.
