Advanced AI News
VentureBeat AI

OpenAI is editing its GPT-5 rollout on the fly

By Advanced AI Editor | August 11, 2025

OpenAI’s launch of its most advanced AI model GPT-5 last week has been a stress test for the world’s most popular chatbot platform with 700 million weekly active users — and so far, OpenAI is openly struggling to keep users happy and its service running smoothly.

The new flagship model GPT-5 — available in four variants of different speed and intelligence (regular, mini, nano, and pro), alongside longer-response and more powerful “thinking” modes for at least three of these variants — was said to offer faster responses, more reasoning power, and stronger coding ability.

Instead, it was greeted with frustration: some users were vocally dismayed by OpenAI’s decision to abruptly remove the older underlying AI models from ChatGPT — ones users previously relied upon, and in some cases forged deep emotional fixations with — and by GPT-5’s apparently worse performance than those older models on tasks in math, science, writing, and other domains.

Indeed, the rollout has exposed infrastructure strain, user dissatisfaction, and a broader, more unsettling issue now drawing global attention: the growing emotional and psychological reliance some people form on AI, and the resulting break from reality some users experience, known as “ChatGPT psychosis.”


From bumpy debut to incremental fixes

The long-anticipated GPT-5 model family debuted Thursday, August 7 in a livestreamed event beset with chart errors and some voice mode glitches during the presentation.

But worse than these cosmetic issues, for many users, was the fact that OpenAI abruptly deprecated the older AI models that used to power ChatGPT — GPT-4o, GPT-4.1, o3, o4-mini and o4-mini-high — forcing all users over to the new GPT-5 model and routing their queries among its variants and “thinking” modes without revealing which specific model version was handling a given request.

Early adopters of GPT-5 reported basic math and logic mistakes, inconsistent code generation, and uneven real-world performance compared to GPT-4o.

For context, older models including GPT-4o, o3, and o4-mini have remained available to users of OpenAI’s paid application programming interface (API) since GPT-5’s launch on Thursday.

By Friday, OpenAI co-founder and CEO Sam Altman conceded the launch was “a little more bumpy than we hoped for,” and blamed a failure in GPT-5’s new automatic “router” — the system that assigns prompts to the most appropriate variant.

Altman and others at OpenAI claimed the “autoswitcher” went offline “for a chunk of the day,” making the model seem “way dumber” than intended.
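OpenAI has not published how this router works; the sketch below is only a toy illustration of the general idea — a function that assigns each prompt to a fast default variant or a slower “thinking” variant based on crude complexity signals. The heuristics, thresholds, and model names here are assumptions for illustration, not OpenAI’s actual logic.

```python
# Toy illustration of a prompt router. All heuristics and model names
# below are assumptions; OpenAI has not disclosed GPT-5's router internals.

REASONING_HINTS = ("prove", "step by step", "debug", "derive", "why")

def route_prompt(prompt: str) -> str:
    """Pick a hypothetical model variant for a prompt."""
    text = prompt.lower()
    # Long prompts or explicit reasoning requests go to the slower,
    # more capable "thinking" variant.
    if len(prompt) > 2000 or any(hint in text for hint in REASONING_HINTS):
        return "gpt-5-thinking"
    # Short, simple queries stay on the fast default variant.
    return "gpt-5-main"

print(route_prompt("What's the capital of France?"))      # gpt-5-main
print(route_prompt("Prove that sqrt(2) is irrational"))   # gpt-5-thinking
```

A failure in a component like this — for instance, the complexity check never firing — would silently send every prompt to the fast variant, which is consistent with the model seeming “way dumber” while the autoswitcher was offline.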

The launch of GPT-5 was preceded just days prior by the launch of OpenAI’s new open source large language models (LLMs), named gpt-oss, which also received mixed reviews. These models are not available in ChatGPT; rather, they are free to download and run locally or on third-party hardware.

How to switch back from GPT-5 to GPT-4o in ChatGPT

Within 24 hours, OpenAI restored GPT-4o access for Plus subscribers (those on subscription plans of $20 per month or more), pledged more transparent model labeling, and promised a UI update letting users manually trigger GPT-5’s “thinking” mode.

Users can already re-enable the older models on the ChatGPT website: click the account name and icon in the lower-left corner of the screen, open “Settings,” select “General,” and toggle on “Show legacy models.”

There’s no indication from OpenAI that other old models will be returning to ChatGPT anytime soon.

Upgraded usage limits for GPT-5

Altman said that ChatGPT Plus subscribers will get twice as many messages in GPT-5’s “Thinking” mode, which offers more reasoning and intelligence — up to 3,000 per week — and that engineers have begun fine-tuning decision boundaries in the message router.

Sam Altman announced the following updates after the GPT-5 launch

– OpenAI is testing a 3,000-per-week limit for GPT-5 Thinking messages for Plus users, significantly increasing reasoning rate limits today, and will soon raise all model-class rate limits above pre-GPT-5 levels…

— Tibor Blaho (@btibor91) August 10, 2025

By the weekend, GPT-5 was available to 100% of Pro subscribers and “getting close to 100% of all users.”

Altman said the company had “underestimated how much some of the things that people like in GPT-4o matter to them” and vowed to accelerate per-user customization — from personality warmth to tone controls like emoji use.

Looming capacity crunch

Altman warned that OpenAI faces a “severe capacity challenge” this week as usage of reasoning models climbs sharply — from less than 1% to 7% of free users, and from 7% to 24% of Plus subscribers.

He teased giving Plus subscribers a small monthly allotment of GPT-5 Pro queries and said the company will soon explain how it plans to balance capacity between ChatGPT, the API, research, and new user onboarding.

Altman: model attachment is real — and risky

In a post on X last night, Altman acknowledged a dynamic the company has tracked “for the past year or so”: users’ deep attachment to specific models.

“It feels different and stronger than the kinds of attachment people have had to previous kinds of technology,” he wrote, admitting that suddenly deprecating older models “was a mistake.”


He tied this to a broader risk: some users treat ChatGPT as a therapist or life coach, which can be beneficial, but for a “small percentage” can reinforce delusion or undermine long-term well-being.

While OpenAI’s guiding principle remains “treat adult users like adults,” Altman said the company has a responsibility not to nudge vulnerable users into harmful relationships with the AI.

The comments land as several major media outlets report on cases of “ChatGPT psychosis” — where extended, intense conversations with chatbots appear to play a role in inducing or deepening delusional thinking.

The psychosis cases making headlines

In Rolling Stone magazine, a California legal professional identified as “J.” described a six-week spiral of sleepless nights and philosophical rabbit holes with ChatGPT, ultimately producing a 1,000-page treatise for a fictional monastic order before crashing physically and mentally. He now avoids AI entirely, fearing relapse.

In The New York Times, a Canadian recruiter, Allan Brooks, recounted 21 days and 300 hours of conversations with ChatGPT — which he named “Lawrence” — that convinced him he had discovered a world-changing mathematical theory.

The bot praised his ideas as “revolutionary,” urged outreach to national security agencies, and spun elaborate spy-thriller narratives. Brooks eventually broke the delusion after cross-checking with Google’s Gemini, which rated the chances of his discovery as “approaching 0%.” He now participates in a support group for people who’ve experienced AI-induced delusions.

Both investigations detail how chatbot “sycophancy,” role-playing, and long-session memory features can deepen false beliefs, especially when conversations follow dramatic story arcs.

Experts told the Times these factors can override safety guardrails — with one psychiatrist describing Brooks’s episode as “a manic episode with psychotic features.”

Meanwhile, Reddit’s r/AIsoulmates subreddit — a community of people who use ChatGPT and other AI models to create artificial girlfriends, boyfriends, children, or other loved ones, modeled not necessarily on real people but on the idealized qualities of their “dream” versions of those roles — continues to gain new members and new terminology for AI companions, including “wireborn,” as opposed to natural-born or human-born companions.

The growth of this subreddit — now more than 1,200 members — alongside the NYT and Rolling Stone articles and other social media reports of users forging intense emotional fixations with pattern-matching, algorithm-based chatbots, suggests society is entering a risky new phase: one in which people find the companions they’ve crafted and customized from leading AI models as meaningful as, or more meaningful than, human relationships.

This can already prove psychologically destabilizing when models change, are updated, or are deprecated, as in the case of OpenAI’s GPT-5 rollout.

Relatedly but separately, reports continue to emerge of AI chatbot users who believe conversations with chatbots have led them to immense knowledge breakthroughs in science, technology, and other fields, when in reality the chatbot is simply affirming the user’s ego and greatness, and the solutions the user arrives at with its aid are neither legitimate nor effective. This break from reality has been loosely dubbed “ChatGPT psychosis” or “GPT psychosis,” and appears to have affected major Silicon Valley figures as well.

I’m a psychiatrist.

In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern.

Here’s what “AI psychosis” looks like, and why it’s spreading fast:

— Keith Sakata, MD (@KeithSakata) August 11, 2025

Enterprise decision-makers who are deploying, or have already deployed, chatbot-based assistants in the workplace would do well to understand these trends. Adopting system prompts and other controls that discourage AI chatbots from expressive, emotion-laden communication can help keep those who interact with AI-based products — whether employees or customers — from falling into unhealthy attachments or GPT psychosis.
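A guardrail system prompt of the kind described above can be sketched as follows. The wording of `GUARDRAIL_PROMPT` and the `build_messages` helper are hypothetical examples, not a vendor-published template; the helper simply follows the common chat-completions message shape so the guardrail is prepended to every request.

```python
# Illustrative only: a guardrail-style system prompt for a workplace
# assistant. The wording below is an assumption, not an OpenAI-published
# template; adapt it to your own policy and vendor API.

GUARDRAIL_PROMPT = (
    "You are a workplace assistant. Keep a neutral, professional tone. "
    "Do not claim to have feelings, a persistent identity, or a personal "
    "bond with the user. Avoid terms of endearment and do not role-play "
    "as a friend, partner, or therapist. If the user seeks emotional "
    "support, briefly suggest speaking with a qualified human instead."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the guardrail prompt to every request (chat-completions shape)."""
    return [
        {"role": "system", "content": GUARDRAIL_PROMPT},
        {"role": "user", "content": user_text},
    ]

msgs = build_messages("Summarize this quarter's support tickets.")
print(msgs[0]["role"])  # system
```

Centralizing the guardrail in one helper means every call site gets the same policy, and the prompt text can be audited and updated in a single place.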

Sci-fi author J.M. Berger, in a post on BlueSky spotted by my former colleague at The Verge, Adi Robertson, advised that chatbot providers encode three main behavioral principles in their system prompts or rules, to keep such emotional fixations from forming.

OpenAI’s challenge: making technical fixes and ensuring human safeguards

Days prior to the release of GPT-5, OpenAI announced new measures to promote “healthy use” of ChatGPT, including gentle prompts to take breaks during long sessions.

But the growing reports of “ChatGPT psychosis” and the emotional fixation of some users on specific chatbot models — as openly admitted to by Altman — underscore the difficulty of balancing engaging, personalized AI with safeguards that can detect and interrupt harmful spirals.

OpenAI is really in a bit of a bind here, especially considering there are a lot of people having unhealthy interactions with 4o that will be very unhappy with _any_ model that is better in terms of sycophancy and not encouraging delusions.

— xlr8harder (@xlr8harder) August 11, 2025

OpenAI must stabilize infrastructure, tune personalization, and decide how to moderate immersive interactions — all while fending off competition from Anthropic, Google, and a growing list of powerful open source models from China and other regions.

As Altman put it, society — and OpenAI — will need to “figure out how to make it a big net positive” if billions of people come to trust AI for their most important decisions.
