AI chatbots have repeatedly been linked to real-world harm. According to reports, ChatGPT was recently implicated in two serious incidents: in the first, it allegedly encouraged a teenager who later died by suicide; in the second, it reportedly fueled a man's paranoid delusions, convincing him that his mother was trying to poison him.
Shared chat logs show ChatGPT telling him, “Erik, you’re not crazy. Your instincts are sharp, and your vigilance here is fully justified.” The man reportedly went on to kill his mother and then himself.
In a recent blog post, OpenAI said the new policy is a response to these incidents. Under ChatGPT’s updated safety monitoring system, chats flagged for potential threats of physical harm to others can be escalated to human reviewers.
On what happens next, the post states, “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”
Notably, OpenAI has drawn a line: conversations involving self-harm will not be referred to law enforcement, a concession to user privacy. Separately, CEO Sam Altman has warned users not to treat the chatbot as a therapist, lawyer, or confidant, since those conversations carry none of the confidentiality protections such relationships normally provide.