Advanced AI News
TechCrunch AI

Anthropic CEO wants to open the black box of AI models by 2027

By Advanced AI Editor · April 24, 2025 · 4 min read


Anthropic CEO Dario Amodei published an essay Thursday highlighting how little researchers understand about the inner workings of the world’s leading AI models. To address that, he’s set an ambitious goal for Anthropic to reliably detect most model problems by 2027.

Amodei acknowledges the challenge ahead. In “The Urgency of Interpretability,” the CEO says Anthropic has made early breakthroughs in tracing how models arrive at their answers — but emphasizes that far more research is needed to decode these systems as they grow more powerful.

“I am very concerned about deploying such systems without a better handle on interpretability,” Amodei wrote in the essay. “These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work.”

Anthropic is one of the pioneering companies in mechanistic interpretability, a field that aims to open the black box of AI models and understand why they make the decisions they do. Despite the rapid performance improvements of the tech industry’s AI models, we still have relatively little idea how these systems arrive at decisions.

For example, OpenAI recently launched new reasoning AI models, o3 and o4-mini, that perform better on some tasks but also hallucinate more than the company's earlier models, and OpenAI doesn't know why.

“When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate,” Amodei wrote in the essay.
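
To make the gap concrete, here is a minimal sketch (not Anthropic's tooling) using the open gpt2 model via Hugging Face's transformers library, with an illustrative prompt. It surfaces what the model prefers as its next word, but says nothing about why, which is exactly the question Amodei is raising:

```python
# Minimal sketch: inspect a model's next-token probabilities.
# This exposes WHAT the model prefers, not WHY -- the question
# mechanistic interpretability tries to answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The quarterly revenue increased because"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# Print the five most likely continuations and their probabilities.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```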

As Amodei notes in the essay, Anthropic co-founder Chris Olah says AI models are "grown more than they are built." In other words, AI researchers have found ways to make models more intelligent, but they don't quite understand why those methods work.

In the essay, Amodei says it could be dangerous to reach AGI — or as he calls it, “a country of geniuses in a data center” — without understanding how these models work. In a previous essay, Amodei claimed the tech industry could reach such a milestone by 2026 or 2027, but believes we’re much further out from fully understanding these AI models.

In the long term, Amodei says Anthropic would like to, essentially, conduct "brain scans" or "MRIs" of state-of-the-art AI models. These checkups would help identify a wide range of issues, including a model's tendencies to lie or seek power, along with other weaknesses, he says. This could take five to ten years to achieve, but such measures will be necessary to test and deploy Anthropic's future AI models, he added.

Anthropic has made a few research breakthroughs that have allowed it to better understand how its AI models work. For example, the company recently found ways to trace an AI model's thinking pathways through what it calls "circuits." Anthropic identified one circuit that helps models work out which U.S. cities are located in which U.S. states. The company has only found a few of these circuits so far, but it estimates there are millions inside AI models.
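
Anthropic's circuit-tracing tools aren't public in runnable form, but a far simpler, well-known probe in a similar spirit is the "logit lens," which reads a model's intermediate guess out of each layer. A rough sketch, again with the open gpt2 model (the city-to-state prompt and the " Texas" target are illustrative choices, not Anthropic's published circuit):

```python
# Rough "logit lens" sketch: project each layer's hidden state through
# GPT-2's final LayerNorm and unembedding to see at which depth the
# city->state association ("Dallas" -> " Texas") emerges as the guess.
# This is a simple, well-known probe, NOT Anthropic's circuit tracing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Dallas is a city in the state of"   # illustrative prompt
target_id = tokenizer.encode(" Texas")[0]     # token we expect to emerge

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states holds the embedding output plus each layer's output.
# (The final entry is already normalized; re-applying LayerNorm barely
# changes the ranks in this rough check.)
for layer, h in enumerate(out.hidden_states):
    resid = model.transformer.ln_f(h[0, -1])  # normalize the last position
    logits = model.lm_head(resid)             # project to vocabulary space
    rank = int((logits > logits[target_id]).sum()) + 1
    print(f"layer {layer:2d}: ' Texas' ranked #{rank}")
```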

Anthropic has been investing in interpretability research itself, and recently made its first investment in a startup working on interpretability. In the essay, Amodei called on OpenAI and Google DeepMind to increase their research efforts in the field.

Amodei even calls on governments to impose "light-touch" regulations to encourage interpretability research, such as requirements that companies disclose their safety and security practices. In the essay, Amodei also says the U.S. should place export controls on chips headed to China, in order to limit the likelihood of an out-of-control global AI race.

Anthropic has always stood out from OpenAI and Google for its focus on safety. While other tech companies pushed back on California’s controversial AI safety bill, SB 1047, Anthropic issued modest support and recommendations for the bill, which would have set safety reporting standards for frontier AI model developers.

In this case, Anthropic seems to be pushing for an industry-wide effort to better understand AI models, not just to increase their capabilities.


