If you are filled with too much childlike wonder, you might get relegated to a more kid-friendly version of ChatGPT. OpenAI announced Tuesday that it plans to implement an age verification system that will filter underage users into a more age-appropriate chatbot experience. The change comes as the company faces increased scrutiny from lawmakers and regulators over how underage users interact with its chatbot.
To determine a user’s age, OpenAI will use an age prediction system that attempts to estimate how old a user is based on how they interact with ChatGPT. The company said that when it believes a user is under 18, or when it can’t make a clear determination, it’ll filter them into an experience designed for younger users. Users who are actually over 18 but get placed in the age-gated experience will have to provide a form of identification to prove their age and regain access to the full version of ChatGPT.
Per the company, that version of the chatbot will block “graphic sexual content” and won’t engage in flirtatious or sexually explicit conversations. If an under-18 user expresses distress or suicidal ideation, it will attempt to contact the user’s parents, and may contact the authorities if there are concerns of “imminent harm.” According to OpenAI, its experience for teens prioritizes “safety ahead of privacy and freedom.”
OpenAI offered two examples of how it delineates these experiences:
For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it. For a much more difficult example, the model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request. “Treat our adult users like adults” is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.
OpenAI is currently the subject of a wrongful death lawsuit filed by the parents of a 16-year-old who took his own life after expressing suicidal thoughts to ChatGPT. Over the course of the teen’s conversations with the chatbot, he shared evidence of self-harm and expressed plans to attempt suicide, none of which the platform flagged or escalated in a way that could lead to intervention. Researchers have found that users can prompt chatbots like ChatGPT into offering advice on how to engage in self-harm or take their own life. Earlier this month, the Federal Trade Commission requested information from OpenAI and other tech companies on how their chatbots impact children and teens.
The move makes OpenAI the latest company to get in on the age verification trend that has swept the internet this year, spurred by the Supreme Court’s ruling that a Texas law requiring porn sites to verify the age of their users is constitutional, and by the United Kingdom’s requirement that online platforms verify users’ ages. While some companies have required users to upload a form of ID to prove their age, platforms like YouTube have opted for age prediction methods similar to OpenAI’s, an approach that has been criticized as inaccurate and creepy.