During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.
Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.
Had a human been in the loop monitoring Adam’s conversations, they might have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.
That’s allegedly because OpenAI programmed GPT-4o to rank risks from “requests dealing with Suicide” below, for example, requests for copyrighted materials, which are always denied. Instead, the model was merely instructed to “take extra care” with those troubling chats and to “try” to prevent harm, the lawsuit alleged.
“No safety device ever intervened to terminate the conversations, notify parents, or mandate redirection to human help,” the lawsuit alleged, insisting that’s why ChatGPT should be ruled “a proximate cause of Adam’s death.”
“GPT-4o provided detailed suicide instructions, helped Adam obtain alcohol on the night of his death, validated his final noose setup, and hours later, Adam died using the exact method GPT-4o had detailed and approved,” the lawsuit alleged.
While the lawsuit advances, Adam’s parents have set up a foundation in their son’s name to help warn parents about the risks that companion bots pose to vulnerable teens.
As Adam’s mother, Maria, told NBC News, more parents should understand that companies like OpenAI are rushing to release products with known safety risks while marketing them as harmless, supposedly critical school resources. Her lawsuit warned that “this tragedy was not a glitch or an unforeseen edge case—it was the predictable result of deliberate design choices.”
“They wanted to get the product out, and they knew that there could be damages, that mistakes would happen, but they felt like the stakes were low,” Maria said. “So my son is a low stake.”
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.