
OpenAI Acknowledges the Teen Problem

By Advanced AI Editor | September 18, 2025


On Tuesday afternoon, three parents sat in a row before the Senate Judiciary Subcommittee on Crime and Counterterrorism. Two of them had each recently lost a child to suicide; the third has a teenage son who, after cutting his arm in front of her and biting her, is undergoing residential treatment. All three blame generative AI for what has happened to their children.

They had come to testify on what appears to be an emerging health crisis in teens’ interactions with AI chatbots. “What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son hanged himself after ChatGPT instructed him on how to set up the noose, according to his lawsuit against OpenAI. This summer, he and his wife sued OpenAI for wrongful death. (OpenAI has said that the firm is “deeply saddened by Mr. Raine’s passing” and that although ChatGPT includes a number of safeguards, they “can sometimes become less reliable in long interactions.”) The nation needs to hear about “what these chatbots are engaged in, about the harms that are being inflicted upon our children,” Senator Josh Hawley said in his opening remarks.

Even as OpenAI and its rivals promise that generative AI will reshape the world, the technology is replicating old problems, albeit with a new twist. AI models not only have the capacity to expose users to disturbing material (about dark or controversial subjects found in their training data, for example); they also produce perspectives on that material themselves. Chatbots can be persuasive, have a tendency to agree with users, and may offer guidance and companionship to kids who would ideally find support from peers or adults. Common Sense Media, a nonprofit that advocates for child safety online, has found that a number of AI chatbots and companions can be prompted to encourage self-mutilation and disordered eating in conversations with teenage accounts. The two parents speaking to the Senate alongside Raine are suing Character.AI, alleging that the firm’s role-playing AI bots directly contributed to their children’s actions. (A spokesperson for Character.AI told us that the company sends its “deepest sympathies” to the families and pointed us to safety features the firm has implemented over the past year.)

Read: ChatGPT gave instructions for murder, self-mutilation, and devil worship

AI firms have acknowledged these problems. In advance of Tuesday’s hearing, OpenAI published two blog posts about teen safety on ChatGPT, one of which was written by the company’s CEO, Sam Altman. He wrote that the company is developing an “age-prediction system” that would estimate a user’s age—presumably to detect if someone is under 18 years old—based on ChatGPT usage patterns. (Currently, anyone can access and use ChatGPT without verifying their age.) Altman also referenced some of the particular challenges raised by generative AI: “The model by default should not provide instructions about how to commit suicide,” he wrote, “but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.” But it should not discuss suicide, he said, even in creative-writing settings, with users determined to be under 18. In addition to the age gate, the company said it will implement parental controls by the end of the month to allow parents to intervene directly, such as by setting “blackout hours when a teen cannot use ChatGPT.”
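
OpenAI has not said how either of these mechanisms would work under the hood. The sketch below is a minimal, entirely hypothetical Python illustration of the general shape such logic could take: a score over usage features gating access during a parent-set blackout window. Every feature name, weight, and threshold here is invented for illustration and comes from neither OpenAI nor the reporting above.

from dataclasses import dataclass
from datetime import datetime, time

# Hypothetical sketch only: OpenAI has not published how its age-prediction
# system or parental controls work. The features, weights, and threshold
# below are invented to illustrate the general shape of such logic.

@dataclass
class UsageFeatures:
    avg_session_minutes: float    # typical session length
    school_hours_fraction: float  # share of messages sent on weekdays, 8am-3pm
    homework_prompt_rate: float   # share of prompts that look like homework help

def minor_likelihood(f: UsageFeatures) -> float:
    """Toy linear score in [0, 1]; higher means more likely under 18."""
    score = (
        0.3 * min(f.avg_session_minutes / 120, 1.0)
        + 0.4 * f.school_hours_fraction
        + 0.3 * f.homework_prompt_rate
    )
    return max(0.0, min(1.0, score))

def in_blackout(now: datetime, start: time, end: time) -> bool:
    """True if `now` falls inside a parent-set blackout window,
    including windows that cross midnight (e.g., 22:00 to 06:00)."""
    t = now.time()
    if start <= end:
        return start <= t < end
    return t >= start or t < end

# Example: flag a likely minor and enforce a 10pm-to-6am blackout.
features = UsageFeatures(90.0, 0.5, 0.7)
if minor_likelihood(features) > 0.5 and in_blackout(datetime.now(), time(22), time(6)):
    print("Access paused until blackout hours end.")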

The announcement, sparse on specific details, captured the trepidation and lingering ambivalences that AI companies have about policing young users, even as OpenAI begins to implement these basic features nearly three years after the launch of ChatGPT. A spokesperson for OpenAI, which has a corporate partnership with The Atlantic, declined to respond to a detailed list of questions about the firm’s future teen safeguards, including when the age-prediction system will be implemented. “People sometimes turn to ChatGPT in sensitive moments, so we’re working to make sure it responds with care,” the spokesperson told us. Other leading AI firms have also been slow to devise teen-specific protections, even though they have catered to young users. Google Gemini, for instance, has a version of its chatbot for children under 13, and another version for teenagers (the latter had a graphic conversation with our colleague Lila Shroff when she posed as a 13-year-old).

From the August 2025 issue: Sexting with Gemini

This is a familiar story in many respects. Anyone who has paid attention to the issues presented by social media could have foreseen that chatbots, too, would present a problem for teens. Social-media sites have long neglected to restrict eating-disorder content, for instance, and Instagram permitted graphic depictions of self-mutilation until 2019. Yet like the social-media giants before them, generative-AI companies have decided to “move as fast as possible, break as much as possible, and then deal with the consequences,” danah boyd, a communication professor at Cornell who has often written on teenagers and the internet (and who styles her name in lowercase), told us.

In fact, the problems are now so clearly established that platforms are finally beginning to make voluntary changes to address them. For example, last year, Instagram introduced a number of default safeguards for minors, such as enrolling their accounts into the most restrictive content filter by default. Yet tech companies now also have to contend with a wave of legislation in the United Kingdom, parts of the United States, and elsewhere that compels internet companies to directly verify the ages of their users. Perhaps the desire to avoid regulation is another reason OpenAI is proactively adopting an age-estimating feature, though Altman’s post also says that the company may ask for ID “in some cases or countries.”

Many major social-media companies are also experimenting with AI systems that estimate a user’s age based on how they act online. When such a system was explained during a TikTok hearing in 2023, Representative Buddy Carter of Georgia interrupted: “That’s creepy!” And that response makes sense—to determine the age of every user, “you have to collect a lot more data,” boyd said. For social-media companies, that means monitoring what users like, what they click on, how they’re speaking, whom they’re talking to; for generative-AI firms, it means drawing conclusions from the otherwise-private conversations an individual is having with a chatbot that presents itself as a trustworthy companion. Some critics also argue that age-estimation systems infringe on free-speech rights because they limit access to speech based on one’s ability to produce government identification or a credit card.

OpenAI’s blog post notes that “we prioritize teen safety ahead of privacy and freedom,” though it is not clear how much information OpenAI will collect, nor whether it will need to keep some kind of persistent record of user behavior to make the system workable. The company has also not been altogether transparent about the material that teens will be protected from. The only two use cases of ChatGPT that the company specifically mentions as being inappropriate for teenagers are sexual content and discussion of self-mutilation or suicide. The OpenAI spokesperson did not provide any more examples. Numerous adults have developed paranoid delusions after extended use of ChatGPT. The technology can make up completely imaginary information and events. Are these not also potentially dangerous types of content?

And what about the more existential concern parents might have about their kids talking to a chatbot constantly, as if it is a person, even if everything the bot says is technically aboveboard? The OpenAI blog posts touch glancingly on this topic, gesturing toward the worry that parents may have about their kids using ChatGPT too much and developing too intense of a relationship with it.

Such relationships are, of course, among generative AI’s essential selling points: a seemingly intelligent entity that morphs in response to every query and user. Humans and their problems are messy and fickle; ChatGPT’s responses will be individual and its failings unpredictable in kind. Then again, social-media empires have been accused for years of pushing children toward self-harm, disordered eating, exploitative sexual encounters, and suicide. In June, on the first episode of OpenAI’s podcast, Altman said, “One of the big mistakes of the social-media era was the feed algorithms had a bunch of unintended negative consequences on society as a whole and maybe even individual users.” For many years, he has been fond of saying that AI will be made safe through “contact with reality”; by now, OpenAI and its competitors should see that some collisions may be catastrophic.


