Advanced AI News
TechCrunch AI

Karen Hao on the Empire of AI, AGI evangelists, and the cost of belief

By Advanced AI Editor · September 14, 2025 · 6 min read


At the center of every empire is an ideology, a belief system that propels the system forward and justifies expansion – even if the cost of that expansion directly defies the ideology’s stated mission.

For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today’s AI empire, it’s artificial general intelligence to “benefit all humanity.” And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built. 

“I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI,” Karen Hao, journalist and bestselling author of “Empire of AI,” told TechCrunch on a recent episode of Equity. 

In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire. 

“The only way to really understand the scope and scale of OpenAI’s behavior…is actually to recognize that they’ve already grown more powerful than pretty much any nation state in the world, and they’ve consolidated an extraordinary amount of not just economic power, but also political power,” Hao said. “They’re terraforming the Earth. They’re rewiring our geopolitics, all of our lives. And so you can only describe it as an empire.”

OpenAI has described AGI as “a highly autonomous system that outperforms humans at most economically valuable work,” one that will somehow “elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.” 

These nebulous promises have fueled the industry’s exponential growth — its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.

Hao says this path wasn't inevitable, and that scaling isn't the only way to advance AI.

“You can also develop new techniques in algorithms,” she said. “You can improve the existing algorithms to reduce the amount of data and compute that they need to use.”

But that tactic would have meant sacrificing speed. 

“When you define the quest to build beneficial AGI as one where the victor takes all — which is what OpenAI did — then the most important thing is speed over anything else,” Hao said. “Speed over efficiency, speed over safety, speed over exploratory research.”

OpenAI CEO Sam Altman speaks during the Kakao media day in Seoul. Image credits: Kim Jae-Hwan/SOPA Images/LightRocket via Getty Images

For OpenAI, she said, the best way to guarantee speed was to take existing techniques and “just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques.”

OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line. 

“And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, then you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration,” Hao said.

The spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it would spend up to $72 billion on building AI infrastructure this year. Google expects to hit up to $85 billion in capital expenditures for 2025, most of which will be spent on expanding AI and cloud infrastructure. 

Meanwhile, the goalposts keep moving, and the loftiest “benefits to humanity” haven’t yet materialized, even as the harms mount: job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, while being paid very low wages — around $1 to $2 an hour — in roles like content moderation and data labeling.

Hao said it’s a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits.

She pointed to Google DeepMind’s Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, and can now accurately predict the 3D structure of proteins from their amino acids — profoundly useful for drug discovery and understanding disease.

“Those are the types of AI systems that we need,” Hao said. “AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms … because it’s trained on substantially less infrastructure. It does not create content moderation harms because [the datasets don’t have] all of the toxic crap that you hoovered up when you were scraping the internet.” 

Alongside the quasi-religious commitment to AGI has run a narrative about the importance of beating China in the AI race, so that Silicon Valley can have a liberalizing effect on the world.

“Literally, the opposite has happened,” Hao said. “The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself.”

Of course, many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise huge productivity gains by automating coding, writing, research, customer support, and other knowledge work.

But the way OpenAI is structured — part non-profit, part for-profit — complicates how it defines and measures its impact on humanity. And that’s further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.

Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to confuse its for-profit and non-profit missions — that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.

Hao echoed these concerns, describing the dangers of being so consumed by the mission that reality is ignored.

“Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over,” Hao said. “There’s something really dangerous and dark about that, of [being] so wrapped up in a belief system you constructed that you lose touch with reality.”


