OpenAI has revealed plans to integrate new parental controls into its AI chatbot, ChatGPT, following a lawsuit filed by the parents of a 16-year-old who died by suicide.
The bereaved parents allege that the AI chatbot contributed to their child's death, and OpenAI has vowed to make the chatbot safer for all users, especially teens.
OpenAI’s ChatGPT Parental Controls Are Coming
OpenAI shared a new announcement revealing its latest plans to add parental controls to ChatGPT to help families navigate personal conflicts and crises. The controls are intended to give parents more meaningful insight into how their children, particularly teens, use ChatGPT.
The company says it feels a “deep responsibility to help” the users who need it most, especially those who turn to the chatbot with personal problems and the mental health issues they face daily.
Alongside this, OpenAI also revealed that it is working on a way for teens to add an emergency contact (with parental oversight) so that in moments of crisis, the chatbot could connect teens to people “who can step in.”
OpenAI Faces Teen Wrongful Death Lawsuit
According to CNET, this latest development comes after the parents of a 16-year-old who took his own life earlier this year filed a wrongful death lawsuit against the AI company.
The teenager, Adam Raine, used ChatGPT during his mental health crisis, and the chatbot allegedly provided him with suicide methods and even validated his suicidal thoughts.
Additionally, it was revealed that the chatbot offered to write his suicide note five days before his death in April.
Generative AI and Mental Health Issues
Over the years, generative AI chatbots have been praised for their ability to generate content and for features that make users feel like they’re talking to another person.
However, concerns have also been raised about the technology’s risks. Wrongful death lawsuits against AI companies are no longer new, with several companies facing claims from bereaved family members who blame AI chatbots for their loved ones’ deaths.
For example, Character.AI, a platform that lets users talk to their favorite fictional characters, was blamed for the apparent suicide of a 14-year-old boy in Florida after months of conversations with one of its chatbots.
The wife of a Belgian man also claimed that Eliza, a chatbot on the Chai app, encouraged her husband to take his own life instead of helping him get better.