VentureBeat AI

Stop guessing why your LLMs break: Anthropic’s new tool shows you exactly what goes wrong

By Advanced AI Editor · June 4, 2025 · 5 min read

Large language models (LLMs) are transforming how enterprises operate, but their “black box” nature often leaves organizations grappling with unpredictability. Addressing this critical challenge, Anthropic recently open-sourced its circuit tracing tool, allowing developers and researchers to directly understand and control models’ inner workings.

The tool lets researchers investigate unexplained errors and unexpected behaviors in open-weight models. It can also help with granular fine-tuning of LLMs for specific internal functions.

Understanding the AI’s inner logic

The circuit tracing tool is based on “mechanistic interpretability,” a burgeoning field dedicated to understanding how AI models function from their internal activations rather than merely observing their inputs and outputs.

While Anthropic’s initial circuit tracing research applied this methodology to its own Claude 3.5 Haiku model, the open-sourced tool extends the capability to open-weight models. Anthropic’s team has already used the tool to trace circuits in models such as Gemma-2-2b and Llama-3.2-1b and has released a Colab notebook demonstrating how to use the library on open models.

The core of the tool lies in generating attribution graphs, causal maps that trace the interactions between features as the model processes information and generates an output. (Features are internal activation patterns of the model that can be roughly mapped to understandable concepts.) It is like obtaining a detailed wiring diagram of an AI’s internal thought process. More importantly, the tool enables “intervention experiments,” allowing researchers to directly modify these internal features and observe how changes in the AI’s internal states impact its external responses, making it possible to debug models.
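To make the workflow concrete, here is a minimal sketch of what tracing and intervening could look like in code. Every name below (the module, CircuitTracer, trace, intervene, generate) is an illustrative assumption, not the actual interface of Anthropic’s released library; the Colab notebook mentioned above shows the real API.

```python
# Minimal sketch of an attribution-graph + intervention workflow.
# All names here are hypothetical stand-ins, NOT Anthropic's actual API.

from hypothetical_circuit_tracer import CircuitTracer  # illustrative module

tracer = CircuitTracer(model="gemma-2-2b")  # an open-weight model

# 1. Build an attribution graph: a causal map of which internal features
#    influenced each other, and the final output, for this prompt.
graph = tracer.trace(prompt="The capital of the state containing Dallas is")

# 2. Inspect the features with the strongest influence on the answer.
for feature in graph.top_features(k=5):
    print(feature.label, feature.attribution_score)

# 3. Intervention experiment: clamp one feature to zero and observe whether
#    the answer changes, which tests whether it is causally involved.
suspect = graph.top_features(k=1)[0]
with tracer.intervene(feature_id=suspect.id, value=0.0):
    print(tracer.generate("The capital of the state containing Dallas is"))
```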

The tool integrates with Neuronpedia, an open platform for understanding and experimenting with neural networks.

Circuit tracing on Neuronpedia (source: Anthropic blog)

Practicalities and future impact for enterprise AI

While Anthropic’s circuit tracing tool is a great step toward explainable and controllable AI, it has practical challenges, including high memory costs associated with running the tool and the inherent complexity of interpreting the detailed attribution graphs.

However, these challenges are typical of cutting-edge research. Mechanistic interpretability is a fast-growing area, and most major AI labs are developing methods to investigate the inner workings of large language models. By open-sourcing its circuit tracing tool, Anthropic enables the community to build interpretability tools that are more scalable, automated, and accessible to a wider array of users, opening the way for practical applications of the effort going into understanding LLMs.

As the tooling matures, the ability to understand why an LLM makes a certain decision can translate into practical benefits for enterprises. 

Circuit tracing explains how LLMs perform sophisticated multi-step reasoning. For example, in their study, the researchers were able to trace how a model inferred “Texas” from “Dallas” before arriving at “Austin” as the capital. It also revealed advanced planning mechanisms, like a model pre-selecting rhyming words in a poem to guide line composition. Enterprises can use these insights to analyze how their models tackle complex tasks like data analysis or legal reasoning. Pinpointing internal planning or reasoning steps allows for targeted optimization, improving efficiency and accuracy in complex business processes.

Source: Anthropic

Furthermore, circuit tracing offers better clarity into numerical operations. In their study, the researchers uncovered how models handle arithmetic such as 36+59=95: not through simple algorithms but via parallel pathways and “lookup table” features for digits. Enterprises can use such insights to audit the internal computations behind numerical results, identify the origin of errors and implement targeted fixes to ensure data integrity and calculation accuracy within their open-source LLMs.
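As a deliberately schematic toy of that finding (an analogy, not a reproduction of the model’s actual circuits), the decomposition the researchers describe looks roughly like this: one pathway produces only a coarse magnitude estimate, a separate lookup-table-like pathway produces the exact ones digit, and the two are reconciled at the end.

```python
# Schematic toy of the "parallel pathways" finding for 36 + 59 = 95.
# This is an analogy, NOT the model's actual computation: the real
# magnitude pathway is a learned fuzzy heuristic, not rounding.

def ones_digit_lookup(a: int, b: int) -> int:
    """Exact ones digit via a memorized table of digit-pair sums,
    analogous to the 'lookup table' features the researchers describe."""
    table = {(x, y): (x + y) % 10 for x in range(10) for y in range(10)}
    return table[(a % 10, b % 10)]

def coarse_magnitude(a: int, b: int) -> int:
    """Fuzzy estimate of the sum's size (a stand-in for the fuzzy pathway)."""
    return 5 * round(a / 5) + 5 * round(b / 5)   # 36 + 59 -> 35 + 60 = 95

def schematic_add(a: int, b: int) -> int:
    digit = ones_digit_lookup(a, b)              # 6 + 9 -> 5
    coarse = coarse_magnitude(a, b)              # roughly 95
    # Reconcile: the number ending in `digit` closest to the rough estimate.
    base = coarse - (coarse % 10) + digit
    return min([base - 10, base, base + 10], key=lambda c: abs(c - coarse))

print(schematic_add(36, 59))  # 95
```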

For global deployments, the tool provides insights into multilingual consistency. Anthropic’s previous research shows that models employ both language-specific and abstract, language-independent “universal mental language” circuits, with larger models demonstrating greater generalization. This can potentially help debug localization challenges when deploying models across different languages.

Finally, the tool can help combat hallucinations and improve factual grounding. The research revealed that models have “default refusal circuits” for unknown queries, which are suppressed by “known answer” features. Hallucinations can occur when this inhibitory circuit “misfires.” 
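Continuing with the hypothetical interface sketched earlier (again, illustrative names only, not Anthropic’s actual API), an intervention experiment probing this refusal mechanism might look like the following.

```python
# Hypothetical probe of the refusal mechanism, reusing the illustrative
# `tracer` from the earlier sketch; names are assumptions, not the real API.

prompt = "What is the capital of the fictional country Zarnovia?"

# Baseline: for an unknown entity, the default refusal circuit should win.
print(tracer.generate(prompt))  # expected: a refusal / "I don't know"

# Force-activate a (hypothetical) 'known answer' feature that normally
# suppresses refusal. If the account above is right, this misfire should
# yield a confident fabricated answer, i.e., a hallucination.
graph = tracer.trace(prompt=prompt)
known_answer = next(f for f in graph.features if "known answer" in f.label)
with tracer.intervene(feature_id=known_answer.id, value=5.0):
    print(tracer.generate(prompt))  # expected: a confabulated capital
```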

Source: Anthropic

Beyond debugging existing issues, this mechanistic understanding unlocks new avenues for fine-tuning LLMs. Instead of merely adjusting output behavior through trial and error, enterprises can identify and target the specific internal mechanisms driving desired or undesired traits. For instance, understanding how a model’s “Assistant persona” inadvertently incorporates hidden reward model biases, as shown in Anthropic’s research, allows developers to precisely re-tune the internal circuits responsible for alignment, leading to more robust and ethically consistent AI deployments.

As LLMs integrate ever more deeply into critical enterprise functions, their transparency, interpretability and control become increasingly important. This new generation of tools can help bridge the gap between AI’s powerful capabilities and human understanding, building foundational trust and ensuring that enterprises can deploy AI systems that are reliable, auditable, and aligned with their strategic objectives.
