Advanced AI News
Inside OpenAI’s Growing Pains After Launching ChatGPT: New Book

By Advanced AI Bot | May 20, 2025 | 9 Mins Read

This is an excerpt from “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” by Karen Hao.

The book is based on interviews with around 260 people and an extensive trove of correspondence and documents. Any quoted emails, documents, or Slack messages come from copies or screenshots of those documents and correspondences or are exactly as they appear in lawsuits.

The author reached out to all of the key figures and companies that are described in this book to seek interviews and comment. OpenAI and Sam Altman chose not to cooperate.

In November 2022, rumors began to spread within OpenAI that its rival Anthropic was testing — and would soon release — a new chatbot. If it didn’t launch first, OpenAI risked losing its leading position, which could deliver a big hit to morale for employees who had worked long and tough hours to retain that dominance.

Anthropic had not in fact been planning any imminent releases. But for OpenAI executives, the rumors were enough to trigger a decision: The company wouldn't wait to turn GPT-4 into a chatbot; it would release John Schulman's chat-enabled GPT-3.5 model with the Superassistant team's brand-new chat interface in two weeks, right after Thanksgiving.

No one truly fathomed the societal phase shift they were about to unleash. They expected the chatbot to be a flash in the pan. The night before the release, they placed bets on how many users might try the tool by the end of the weekend. Some people guessed a few thousand. Others guessed tens of thousands. To be safe, the infrastructure team provisioned enough server capacity for 100,000 users.

On Wednesday, November 30, most employees didn’t even realize that the launch had happened. But the following day, the number of users began to surge.

The instant runaway success of ChatGPT was beyond what anyone at OpenAI had dreamed of. It would leave the company's engineers and researchers completely mystified even years later. GPT-3.5 hadn't been that much of a capability improvement over GPT-3, which had already been out for two years. And GPT-3.5 had already been available to developers.

OpenAI CEO Sam Altman later said that he’d believed ChatGPT would be popular but by something like “one order of magnitude less.” “It was shocking that people liked it,” a former employee remembers. “To all of us, they’d downgraded the thing we’d been using internally and launched it.”

Within five days, OpenAI cofounder Greg Brockman tweeted that ChatGPT had crossed one million users. Within two months, it had reached 100 million, becoming what was then the fastest-growing consumer app in history. ChatGPT catapulted OpenAI from a hot startup well-known within the tech industry into a household name overnight.


At the same time, it was this very blockbuster success that would place extraordinary strain on the company. Over the course of a year, it would polarize its factions further and wind up the stress and tension within the organization to an explosive level.

By then, the company had just 300 employees. With every team stretched dangerously thin, managers begged Altman for more head count. There was no shortage of candidates. After ChatGPT, the number of applicants clamoring to join the rocket ship had rapidly multiplied. But Altman worried about what would happen to company culture and mission alignment if the company scaled up its staff too quickly. He believed firmly in maintaining a small staff and high talent density. “We are now in a position where it’s tempting to let the organization grow extremely large,” he had written in his 2020 vision memo, in reference to Microsoft’s investment. “We should try very hard to resist this — what has worked for us so far is being small, focused, high-trust, low-bullshit, and intense. The overhead of too many people and too much bureaucracy can easily kill great ideas or result in sclerosis.”

He was now repeating this to executives in late 2022, emphasizing during head count discussions the need to keep the company lean and the talent bar high, and add no more than 100 or so hires. Other executives balked. At the rate that their teams were burning out, many saw the need for something closer to around 500 or even more new people.

Over several weeks, the executive team finally compromised on a number somewhere in the middle, between 250 and 300. The cap didn’t hold. By summer, there were as many as 30, even 50, people joining OpenAI each week, including more recruiters to scale up hiring even faster. By fall, the company had blown well past its own self-imposed quota.

The sudden growth spurt indeed changed company culture. A recruiter wrote a manifesto about how the pressure to hire so quickly was forcing his team to lower the quality bar for talent. “If you want to build Meta, you’re doing a great job,” he said in a pointed jab at Altman, alluding to the very fears that the CEO had warned about.

The rapid expansion was also leading to an uptick in firings. During his onboarding, one manager was told to swiftly document and report any underperforming members of his team, only to be let go himself sometime later. Terminations were rarely communicated to the rest of the company. People routinely discovered that colleagues had been fired only by noticing when a Slack account grayed out from being deactivated. They began calling it “getting disappeared.”

To new hires, fully bought into the idea that they were joining a fast-moving, money-making startup, the tumult felt like a particularly chaotic, at times brutal, manifestation of standard corporate problems: poor management, confusing priorities, the coldhearted ruthlessness of a capitalistic company willing to treat its employees as disposable. “There was a huge lack of psychological safety,” says a former employee who joined during this era. Many people coming aboard were simply holding on for dear life until their one-year mark to get access to the first share of their equity. One significant upside: They still felt their colleagues were among the highest caliber in the tech industry, which, combined with the seemingly boundless resources and unparalleled global impact, could spark a feeling of magic difficult to find in the rest of the industry when things actually aligned. “OpenAI is one of the best places I’ve ever worked but also probably one of the worst,” the former employee says.

For some employees who remembered the scrappy early days of OpenAI as a tight-knit, mission-driven nonprofit, its dramatic transformation into a big, faceless corporation was far more shocking and emotional. Gone was the organization as they’d known it; in its place was something unrecognizable. “OpenAI is Burning Man,” Rob Mallery, a former recruiter, says, referring to how the desert art festival scaled to the point that it lost touch with its original spirit. “I know it meant a lot more to the people who were there at the beginning than it does to everyone now.”

In those early years, the team had set up a Slack channel called #explainlikeimfive that allowed employees to submit anonymous questions about technical topics. With the company pushing 600 people, the channel also turned into a place for airing anonymous grievances.

In mid-2023, an employee posted that the company was hiring too many people not aligned with the mission or passionate about building AGI.

Another person responded: They knew OpenAI was going downhill once it started hiring people who could look you in the eye.

As OpenAI was rapidly professionalizing and gaining more exposure and scrutiny, incoherence at the top was becoming more consequential. The company was no longer just the Applied and Research divisions. Now there were several public-facing departments: In addition to the communications team, a legal team was writing legal opinions and dealing with a growing number of lawsuits. The policy team was stretching out across continents. Increasingly, OpenAI needed to communicate with one narrative and voice to its constituents, and it needed to determine its positions to articulate them. But on numerous occasions, the lack of strategic clarity was leading to confused public messaging.

[Book cover: "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI," Penguin Press]


At the end of 2023, The New York Times would sue OpenAI and Microsoft for copyright infringement for training on millions of its articles. OpenAI's response in early January, written by the legal team, delivered an unusually feisty hit back, accusing the Times of "intentionally manipulating our models" to generate evidence for its argument. That same week, OpenAI's policy team delivered a submission to the UK House of Lords communications and digital select committee, saying that it would be "impossible" for OpenAI to train its cutting-edge models without copyrighted materials. After the media zeroed in on the word impossible, OpenAI hastily walked back the language.

“There’s just so much confusion all the time,” says an employee in a public-facing department. While some of that reflects the typical growing pains of startups, OpenAI’s profile and reach have well outpaced the relatively early stage of the company, the employee adds. “I don’t know if there is a strategic priority in the C-suite. I honestly think people just make their own decisions. And then suddenly it starts to look like a strategic decision but it’s actually just an accident. Sometimes there isn’t a plan as much as there is just chaos.”


Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She is the author of “Empire of AI.”

Adapted from “EMPIRE OF AI: Dreams and Nightmares in Sam Altman’s OpenAI” by Karen Hao, published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2025 by Karen Hao.

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.
