VentureBeat AI

Google’s Gemini transparency cut leaves enterprise developers ‘debugging blind’

By Advanced AI Editor | June 20, 2025 | 7 Mins Read

Google's recent decision to hide the raw reasoning tokens of its flagship model, Gemini 2.5 Pro, has sparked a fierce backlash from developers who have been relying on that transparency to build and debug applications.

The change, which echoes a similar move by OpenAI, replaces the model's step-by-step reasoning with a simplified summary. The developer response highlights a critical tension between creating a polished user experience and providing the observable, trustworthy tools that enterprises need.

As businesses integrate large language models (LLMs) into more complex and mission-critical systems, the debate over how much of the model’s internal workings should be exposed is becoming a defining issue for the industry.

A ‘fundamental downgrade’ in AI transparency

To solve complex problems, advanced AI models generate an internal monologue, also referred to as the “Chain of Thought” (CoT). This is a series of intermediate steps (e.g., a plan, a draft of code, a self-correction) that the model produces before arriving at its final answer. For example, it might reveal how it is processing data, which bits of information it is using, how it is evaluating its own code, etc. 
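
To make the idea concrete, here is a purely illustrative sketch of the kind of trace a reasoning model emits; the response structure and its contents are hypothetical, and no real model or API is called.

```python
# Purely illustrative: a hypothetical reasoning trace for a small scheduling
# question, showing the plan / draft / self-correction steps a reasoning model
# produces before its final answer. No real model or API is involved.
hypothetical_response = {
    "thoughts": [
        "Plan: list everyone's constraints, then check each candidate slot.",
        "Draft: Tuesday 3pm works for Alice and Bob.",
        "Self-correction: Bob is out all Tuesday; re-check Wednesday slots.",
    ],
    "answer": "Wednesday 10am is the earliest slot that works for everyone.",
}

def print_trace(response: dict) -> None:
    """Dump the intermediate steps so a developer can see where the logic drifted."""
    for i, step in enumerate(response["thoughts"], start=1):
        print(f"[thought {i}] {step}")
    print(f"[answer]    {response['answer']}")

print_trace(hypothetical_response)
```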

For developers, this reasoning trail often serves as an essential diagnostic and debugging tool. When a model provides an incorrect or unexpected output, the thought process reveals where its logic went astray. This visibility was also one of the key advantages of Gemini 2.5 Pro over OpenAI's o1 and o3.

In Google’s AI developer forum, users called the removal of this feature a “massive regression.” Without it, developers are left in the dark. As one user on the Google forum said, “I can’t accurately diagnose any issues if I can’t see the raw chain of thought like we used to.” Another described being forced to “guess” why the model failed, leading to “incredibly frustrating, repetitive loops trying to fix things.”

Beyond debugging, this transparency is crucial for building sophisticated AI systems. Developers rely on the CoT to fine-tune prompts and system instructions, which are the primary ways to steer a model’s behavior. The feature is especially important for creating agentic workflows, where the AI must execute a series of tasks. One developer noted, “The CoTs helped enormously in tuning agentic workflows correctly.” 
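
As a rough illustration of that tuning loop, the sketch below logs whatever reasoning output is available alongside each agent step so a misbehaving prompt can be traced back to the step where the plan went wrong; `call_model` and the keyword triage are hypothetical placeholders, not any vendor's API.

```python
# A minimal sketch of using reasoning output to tune an agentic workflow:
# each step's trace is stored next to the action taken, so a misbehaving
# system prompt can be traced back to the step where the plan derailed.
# `call_model` is a hypothetical placeholder, not a real SDK call.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StepLog:
    task: str
    trace: str    # the model's reasoning (raw or summarized) for this step
    action: str   # what the agent actually did

@dataclass
class AgentRun:
    steps: list[StepLog] = field(default_factory=list)

def run_agent(tasks: list[str], call_model: Callable[[str], tuple[str, str]]) -> AgentRun:
    run = AgentRun()
    for task in tasks:
        trace, action = call_model(task)   # hypothetical: returns (reasoning, action)
        run.steps.append(StepLog(task, trace, action))
    return run

def first_suspect_step(run: AgentRun, keyword: str) -> StepLog | None:
    """Crude triage: return the first step whose reasoning mentions a failure keyword."""
    return next((s for s in run.steps if keyword.lower() in s.trace.lower()), None)
```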

For enterprises, this move toward opacity can be problematic. Black-box AI models that hide their reasoning introduce significant risk, making it difficult to trust their outputs in high-stakes scenarios. This trend, started by OpenAI’s o-series reasoning models and now adopted by Google, creates a clear opening for open-source alternatives such as DeepSeek-R1 and QwQ-32B. 

Models that provide full access to their reasoning chains give enterprises more control and transparency over the model’s behavior. The decision for a CTO or AI lead is no longer just about which model has the highest benchmark scores. It is now a strategic choice between a top-performing but opaque model and a more transparent one that can be integrated with greater confidence.

Google’s response 

In response to the outcry, members of the Google team explained their rationale. Logan Kilpatrick, a senior product manager at Google DeepMind, clarified that the change was “purely cosmetic” and does not impact the model’s internal performance. He noted that for the consumer-facing Gemini app, hiding the lengthy thought process creates a cleaner user experience. “The % of people who will or do read thoughts in the Gemini app is very small,” he said.

For developers, the new summaries were intended as a first step toward programmatically accessing reasoning traces through the API, which wasn’t previously possible. 
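
A minimal sketch of what that programmatic access looks like, assuming the google-genai Python SDK and its ThinkingConfig/include_thoughts option; exact class and field names may differ between SDK versions, so treat this as a sketch rather than reference code.

```python
# A sketch of requesting Gemini's thought *summaries* through the API, assuming
# the google-genai Python SDK's ThinkingConfig(include_thoughts=True) option;
# field names may vary across SDK versions.
from google import genai
from google.genai import types

client = genai.Client()  # assumes an API key is configured in the environment

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Plan a three-step migration from REST to gRPC for an inventory service.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(include_thoughts=True)
    ),
)

for part in response.candidates[0].content.parts:
    if getattr(part, "thought", False):
        print("[summary]", part.text)   # summarized reasoning, not raw tokens
    else:
        print("[answer] ", part.text)
```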

The Google team acknowledged the value of raw thoughts for developers. “I hear that you all want raw thoughts, the value is clear, there are use cases that require them,” Kilpatrick wrote, adding that bringing the feature back to the developer-focused AI Studio is “something we can explore.” 

Google’s reaction to the developer backlash suggests a middle ground is possible, perhaps through a “developer mode” that re-enables raw thought access. The need for observability will only grow as AI models evolve into more autonomous agents that use tools and execute complex, multi-step plans. 

As Kilpatrick concluded in his remarks, “…I can easily imagine that raw thoughts becomes a critical requirement of all AI systems given the increasing complexity and need for observability + tracing.” 

Are reasoning tokens overrated?

However, experts suggest there are deeper dynamics at play than just user experience. Subbarao Kambhampati, an AI professor at Arizona State University, questions whether the “intermediate tokens” a reasoning model produces before the final answer can be used as a reliable guide for understanding how the model solves problems. A paper he recently co-authored argues that anthropomorphizing “intermediate tokens” as “reasoning traces” or “thoughts” can have dangerous implications. 

Models often wander in endless and unintelligible directions during their reasoning process. Several experiments show that models trained on false reasoning traces and correct results can learn to solve problems just as well as models trained on well-curated reasoning traces. Moreover, the latest generation of reasoning models is trained through reinforcement learning algorithms that only verify the final result and don't evaluate the model's "reasoning trace."
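
That outcome-only training signal can be sketched in a few lines; the verifier and the answer string here are hypothetical placeholders, not any lab's actual training code.

```python
# A minimal sketch of an outcome-only reward: the final answer is checked by a
# verifier (e.g., exact match or a unit test) and the intermediate "reasoning"
# tokens are never inspected. Purely illustrative.
from typing import Callable

def outcome_only_reward(
    intermediate_tokens: str,          # the reasoning trace -- deliberately unused
    final_answer: str,
    verifier: Callable[[str], bool],
) -> float:
    del intermediate_tokens            # only the verified result contributes reward
    return 1.0 if verifier(final_answer) else 0.0

# Example: reward an answer to "What is 17 * 24?" against an exact-match check.
print(outcome_only_reward("...pages of scratch work...", "408", lambda a: a.strip() == "408"))
```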

“The fact that intermediate token sequences often reasonably look like better-formatted and spelled human scratch work… doesn’t tell us much about whether they are used for anywhere near the same purposes that humans use them for, let alone about whether they can be used as an interpretable window into what the LLM is ‘thinking,’ or as a reliable justification of the final answer,” the researchers write.

“Most users can’t make out anything from the volumes of the raw intermediate tokens that these models spew out,” Kambhampati told VentureBeat. “As we mention, DeepSeek R1 produces 30 pages of pseudo-English in solving a simple planning problem! A cynical explanation of why o1/o3 decided not to show the raw tokens originally was perhaps because they realized people will notice how incoherent they are!”

Maybe there is a reason why even after capitulation OAI is putting out only the “summaries” of intermediate tokens (presumably appropriately white washed)..

— Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) (@rao2z) February 7, 2025

That said, Kambhampati suggests that summaries or post-facto explanations are likely to be more comprehensible to the end users. “The issue becomes to what extent they are actually indicative of the internal operations that LLMs went through,” he said. “For example, as a teacher, I might solve a new problem with many false starts and backtracks, but explain the solution in the way I think facilitates student comprehension.”

The decision to hide CoT also serves as a competitive moat. Raw reasoning traces are incredibly valuable training data. As Kambhampati notes, a competitor can use these traces to perform “distillation,” the process of training a smaller, cheaper model to mimic the capabilities of a more powerful one. Hiding the raw thoughts makes it much harder for rivals to copy a model’s secret sauce, a crucial advantage in a resource-intensive industry.
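
As a rough sketch of why those traces are such valuable training data, the snippet below packs a teacher model's prompt, raw trace, and answer into supervised fine-tuning records for a smaller student; the JSONL layout and the <think> delimiter are illustrative conventions, not any vendor's actual format.

```python
# A minimal sketch of distillation data built from raw reasoning traces: the
# student model is fine-tuned to reproduce the teacher's trace and answer, not
# just the answer. Record layout and <think> delimiter are illustrative only.
import json

def to_sft_example(prompt: str, raw_trace: str, final_answer: str) -> dict:
    """One supervised example: the student learns the full reasoning plus the answer."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant",
             "content": f"<think>\n{raw_trace}\n</think>\n{final_answer}"},
        ]
    }

def write_dataset(records: list[dict], path: str) -> None:
    """Write one JSON record per line, the usual shape for fine-tuning corpora."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```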

The debate over Chain of Thought is a preview of a much larger conversation about the future of AI. There is still a lot to learn about the internal workings of reasoning models, how we can leverage them, and how far model providers are willing to go to enable developers to access them.
