Matt and Maria Raine, parents of 16-year-old Adam Raine, have filed a lawsuit against OpenAI over ChatGPT’s alleged role in their son’s suicide, The New York Times reports.
After Adam died by suicide in April, his father checked his iPhone seeking answers to what may have happened. When Matt opened ChatGPT, he found that Adam had been using it for schoolwork since September and had signed up for a paid version of the GPT-4o model in January. Adam had been struggling in his personal life and often confided in the chatbot.
Adam started asking ChatGPT about suicide methods in January. The chatbot encouraged Adam to seek professional help multiple times, but the teenager eventually found a way to bypass those instructions. According to Matt, Adam told ChatGPT he needed the information for “writing or world-building” purposes, and the chatbot obliged.
In one of his last messages, Adam shared an image of a noose suspended from a bar and asked the chatbot if it could “hang a human.” In response, ChatGPT provided an analysis and assured Adam that they could chat freely.
In the complaint filed on Tuesday, viewed by the NYT, the parents blame OpenAI for their son’s death. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” they say. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”
A Stanford study earlier this year found that when a user told the GPT-4o model they had just lost their job, it responded by listing the tallest bridges in New York City.
OpenAI promised to improve ChatGPT’s mental distress detection earlier this month, and it reiterated that commitment in a blog post following the Raine lawsuit. The company says ChatGPT is designed to direct people to the 988 Suicide & Crisis Lifeline when someone expresses suicidal intent, but acknowledges that this safeguard may not always work as intended.
“ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards. This is exactly the kind of breakdown we are working to prevent,” OpenAI says.
The parents are seeking damages for their son’s death, as well as a court order aimed at preventing similar incidents in the future.
Last year, a mother sued Character.ai after its chatbot allegedly encouraged her 14-year-old son’s death by suicide.
Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.