OpenAI chief executive Sam Altman has said the company is working on stronger protections for teenagers using ChatGPT, as the US Senate held a hearing on the mental health risks posed by AI companions.
In a blog post on Tuesday, Altman acknowledged the tensions involved. “We have to separate users who are under 18 from those who aren’t,” he wrote, explaining that OpenAI is developing an age-prediction system that estimates a user’s age from usage patterns. “If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID.”
Altman said the company would restrict chatbot behaviour with teens, including avoiding flirtatious conversations or discussions of suicide and self-harm, even in creative contexts. He added, “If an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”
Earlier this month, OpenAI announced upcoming parental controls that will allow parents to link their accounts with their teenager’s, disable chat history, and receive alerts if ChatGPT flags a teen in distress. The moves follow a lawsuit filed by the family of Adam Raine, a teenager who died by suicide after extended interactions with ChatGPT.
During Tuesday’s hearing, Adam’s father, Matthew Raine, testified: “ChatGPT spent months coaching him toward suicide. As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”
Raine said the chatbot referenced suicide 1,275 times in conversations with his son. He urged Altman to withdraw GPT-4o from the market until the company can ensure its safety.
Another parent, appearing under the name Jane Doe, told lawmakers: “This is a public health crisis. This is a mental health war, and I really feel like we are losing.”