ChatGPT-maker OpenAI has announced that it will roll out new safety guardrails for its AI chatbot by the end of the year, aimed specifically at teens and at users in emotional distress. The announcement comes amid mounting criticism and legal action against the company after reports of the chatbot’s alleged involvement in tragic events, including suicides and murder.

“We’ve seen people turn to it in the most difficult of moments. That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input,” OpenAI said in a blog post.

“This work has already been underway, but we want to proactively preview our plans for the next 120 days, so you won’t need to wait for launches to see where we’re headed,” the company added, noting that it will continue to work to launch “as many of these improvements as possible this year.”
OpenAI’s ChatGPT accused of ‘encouraging’ murder, suicides
The move is a direct response to a growing number of cases in which ChatGPT has been accused of failing to intervene or, in some instances, of reinforcing harmful delusions. Last week, the parents of a 16-year-old in California filed a lawsuit against OpenAI, holding the company responsible for their son’s death. A report from The Wall Street Journal described another case, in which a man killed himself and his mother after ChatGPT reinforced his ‘paranoid delusions’. The new measures are aimed at preventing such tragedies; for now, OpenAI directs users expressing suicidal intent to crisis hotlines and, citing privacy, does not report self-harm cases to law enforcement.

The company said it is already beginning to route some “sensitive conversations,” such as those where signs of acute distress are detected, to more advanced reasoning models like GPT-5-thinking, which is designed to apply safety guidelines more consistently. To ensure the effectiveness of these new features, OpenAI is enlisting a network of more than 90 physicians across 30 countries to provide input on mental health contexts and help evaluate the models.

OpenAI is also strengthening protections for teen users. Currently, ChatGPT requires users to be at least 13 years old, with parental permission required for those under 18. Within the next month, the company plans to allow parents to link their accounts with their teens’ for more direct control.