OpenAI will soon roll out new features in ChatGPT that give parents more control over how their children interact with the AI chatbot, according to a blog post by the AI giant.
The Sam Altman-led company says that within the next month, parents will be able to:
Link their account with their teen’s account (minimum age of 13) through an email invitation
Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which will be switched on by default
Manage which features to disable, including memory and chat history
Receive notifications when the system detects their teen is in a moment of “acute distress”
Notably, expert input will guide the “acute distress” notification feature, which is intended to foster trust between parents and their teenage children.
“These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” the OpenAI blog post reads.
The company’s latest move comes in the wake of a wrongful death lawsuit filed in the US by the parents of a 16-year-old.
The parents claim that ChatGPT provided their son with detailed self-harm instructions, validated his suicidal thoughts, discouraged him from seeking help, and ultimately enabled his death by suicide in April 2025.
What Is OpenAI Doing To Help With Mental Health Issues?
In its latest blog post, OpenAI says it will collaborate with an “Expert Council on Well-Being” to measure well-being, set priorities, and design future safeguards with the “latest research in mind.”
“The council’s role is to shape a clear, evidence-based vision for how AI can support people’s well-being and help them thrive,” the blog post notes.
“While the council will advise on our product, research, and policy decisions, OpenAI remains accountable for the choices we make,” it adds.
Furthermore, OpenAI says it will work alongside a global network of physicians to inform its safety research, AI model training, and other interventions.
“More than 90 physicians across 30 countries—including psychiatrists, pediatricians, and general practitioners—have already contributed to our research on how our models should behave in mental health contexts,” the post states.
“We are adding even more clinicians and researchers to our network, including those with deep expertise in areas like eating disorders, substance use, and adolescent health,” OpenAI adds.
OpenAI’s Admissions About ChatGPT’s Errors: A Brief History
This latest policy update comes after OpenAI admitted that safeguards built into its AI system might not work during longer conversations.
The company explained that while ChatGPT may correctly point to a suicide hotline in the early stages of a conversation, “after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”
“Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade,” a blog post dated August 26 reads.
Notably, Altman himself addressed the mental health implications of using ChatGPT in an X (formerly Twitter) post last month.
He emphasized that OpenAI does not want AI models like ChatGPT to reinforce delusions or self-destructive behavior in users who are in a mentally fragile state.
“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy,” Altman wrote.
“But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive,” he added.