Content warning: The following story discusses suicidal thoughts and ideation. If you or someone you know is in crisis, call or text the National Suicide and Crisis Lifeline at 988.
OpenAI is promising to improve safety for ChatGPT users who turn to the bot in moments of crisis. The pledge follows a spate of lawsuits against OpenAI filed after users divulged suicidal thoughts to the bot and then took their own lives.
There’s a lot we don’t yet know about the role of chatbots in some users’ mental health crises, according to Karthik Sarma with UC San Francisco’s AI in Mental Health Research Group.
“Is the use of these agents a mere association or is it somehow causative?” he said.
Heavy use of a chatbot for mental health support could simply be a sign that someone is struggling, Sarma said. Or it could trigger bad outcomes, or some combination of the two. But he said there is evidence that longer chats are more likely to veer into dangerous territory.
“My fear is that what’s happening here is that the model, over the course of these really long conversations, is kind of getting dragged off-center into a place that’s not reality-based,” he said.
OpenAI declined an interview request from Marketplace, but the company has acknowledged that existing safeguards work best in brief exchanges with ChatGPT and sometimes fall short over the course of a drawn-out interaction.
AI companies have a responsibility to help users avoid emotional dependence on their products, said Nicole Martinez-Martin, a bioethicist at Stanford.
“Limiting the amount of use, for example,” she said.
But, she added, that’s at odds with those companies’ business models and design choices “that ultimately are meant to boost engagement, that are meant to bring someone back to keep using it more and more and use it in more personal ways.”
OpenAI says it’s consulting with medical experts and will roll out parental controls and other new safety features before the end of the year.
David Cooper with the consulting company Therapists in Tech said there is a place for AI in the mental health field. (Though some state legislators are looking to limit its use.)
“We can use these tools to our benefit,” he said. “You know, I usually frame it to therapists as, ‘What if you had an assistant that could help you run your private practice, that could help you engage with insurance companies?’”
But Cooper said a lack of access to human clinicians is part of the problem. Financial barriers and a provider shortage help explain why so many Americans are turning to AI for mental health support in the first place.