Advanced AI News
VentureBeat AI

Google’s Gemini 2.5 Flash introduces ‘thinking budgets’ that cut AI costs by 600% when turned down

By Advanced AI Bot · April 18, 2025 · 7 min read

Google has launched Gemini 2.5 Flash, a major upgrade to its AI lineup that gives businesses and developers unprecedented control over how much “thinking” their AI performs. The new model, released today in preview through Google AI Studio and Vertex AI, represents a strategic effort to deliver improved reasoning capabilities while maintaining competitive pricing in the increasingly crowded AI market.

The model introduces what Google calls a “thinking budget” — a mechanism that allows developers to specify how much computational power should be allocated to reasoning through complex problems before generating a response. This approach aims to address a fundamental tension in today’s AI marketplace: more sophisticated reasoning typically comes at the cost of higher latency and pricing.

“We know cost and latency matter for a number of developer use cases, and so we want to offer developers the flexibility to adapt the amount of thinking the model does, depending on their needs,” said Tulsee Doshi, Product Director for Gemini Models at Google DeepMind, in an exclusive interview with VentureBeat.

This flexibility reveals Google’s pragmatic approach to AI deployment as the technology increasingly becomes embedded in business applications where cost predictability is essential. By allowing the thinking capability to be turned on or off, Google has created what it calls its “first fully hybrid reasoning model.”

Pay only for the brainpower you need: Inside Google’s new AI pricing model

The new pricing structure highlights the cost of reasoning in today’s AI systems. When using Gemini 2.5 Flash, developers pay $0.15 per million tokens for input. Output costs vary dramatically based on reasoning settings: $0.60 per million tokens with thinking turned off, jumping to $3.50 per million tokens with reasoning enabled.

This nearly sixfold price difference for reasoned outputs reflects the computational intensity of the “thinking” process, where the model evaluates multiple potential paths and considerations before generating a response.
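To put that gap in concrete terms, here is a quick back-of-the-envelope comparison using the published preview rates. The workload (1 million input tokens and 1 million output tokens) is hypothetical, and real bills also depend on how many thinking tokens the model emits, since those are billed as output:

```python
# Hypothetical cost comparison for Gemini 2.5 Flash at the published preview rates.
# Assumes a workload of 1M input and 1M output tokens; actual totals also depend
# on how many "thinking" tokens the model generates, which count as output.
INPUT_RATE = 0.15                # USD per 1M input tokens
OUTPUT_RATE_NO_THINKING = 0.60   # USD per 1M output tokens, thinking off
OUTPUT_RATE_THINKING = 3.50      # USD per 1M output tokens, thinking on

input_tokens_m = 1.0   # millions of input tokens
output_tokens_m = 1.0  # millions of output tokens

cost_off = input_tokens_m * INPUT_RATE + output_tokens_m * OUTPUT_RATE_NO_THINKING
cost_on = input_tokens_m * INPUT_RATE + output_tokens_m * OUTPUT_RATE_THINKING

print(f"Thinking off: ${cost_off:.2f}")   # $0.75
print(f"Thinking on:  ${cost_on:.2f}")    # $3.65
print(f"Output-rate ratio: {OUTPUT_RATE_THINKING / OUTPUT_RATE_NO_THINKING:.1f}x")  # ~5.8x
```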

“Customers pay for any thinking and output tokens the model generates,” Doshi told VentureBeat. “In the AI Studio UX, you can see these thoughts before a response. In the API, we currently don’t provide access to the thoughts, but a developer can see how many tokens were generated.”

The thinking budget can be adjusted from 0 to 24,576 tokens, operating as a maximum limit rather than a fixed allocation. According to Google, the model intelligently determines how much of this budget to use based on the complexity of the task, preserving resources when elaborate reasoning isn’t necessary.
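For developers trying the preview, the budget is exposed as a request-level setting. The sketch below assumes the Google Gen AI Python SDK (google-genai) and its ThinkingConfig field, with an illustrative preview model ID; treat the exact names as assumptions and verify them against Google's current AI Studio and Vertex AI documentation.

```python
# Minimal sketch: capping the thinking budget with the google-genai Python SDK.
# Model ID and config fields are illustrative assumptions based on the preview;
# check Google's current docs before relying on them.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")   # placeholder API key
MODEL = "gemini-2.5-flash-preview-04-17"        # illustrative preview model ID

def ask(prompt: str, thinking_budget: int) -> str:
    """Send a prompt with an explicit cap on thinking tokens (0 disables thinking)."""
    response = client.models.generate_content(
        model=MODEL,
        contents=prompt,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=thinking_budget)
        ),
    )
    return response.text

# Simple lookup: no reasoning needed, so keep output at the non-thinking rate.
print(ask("How many provinces does Canada have?", thinking_budget=0))

# Harder multi-step task: allow up to the maximum 24,576 thinking tokens.
print(ask("Compute the maximum bending stress in a 4 m simply supported steel beam "
          "with a 10 kN midspan point load and a 200x100x8 mm box section.",
          thinking_budget=24576))
```

Because the budget is a ceiling rather than a fixed allocation, the second call may still use far fewer thinking tokens if the model judges the problem to be straightforward.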

How Gemini 2.5 Flash stacks up: Benchmark results against leading AI models

Google claims Gemini 2.5 Flash demonstrates competitive performance across key benchmarks while maintaining a smaller model size than alternatives. On Humanity’s Last Exam, a rigorous test designed to evaluate reasoning and knowledge, 2.5 Flash scored 12.1%, outperforming Anthropic’s Claude 3.7 Sonnet (8.9%) and DeepSeek R1 (8.6%), though falling short of OpenAI’s recently launched o4-mini (14.3%).

The model also posted strong results on technical benchmarks like GPQA diamond (78.3%) and AIME mathematics exams (78.0% on 2025 tests and 88.0% on 2024 tests).

“Companies should choose 2.5 Flash because it provides the best value for its cost and speed,” Doshi said. “It’s particularly strong relative to competitors on math, multimodal reasoning, long context, and several other key metrics.”

Industry analysts note that these benchmarks indicate Google is narrowing the performance gap with competitors while maintaining a pricing advantage — a strategy that may resonate with enterprise customers watching their AI budgets.

Smart vs. speedy: When does your AI need to think deeply?

The introduction of adjustable reasoning represents a significant evolution in how businesses can deploy AI. With traditional models, users have little visibility into or control over the model’s internal reasoning process.

Google’s approach allows developers to optimize for different scenarios. For simple queries like language translation or basic information retrieval, thinking can be disabled for maximum cost efficiency. For complex tasks requiring multi-step reasoning, such as mathematical problem-solving or nuanced analysis, the thinking function can be enabled and fine-tuned.

A key innovation is the model’s ability to determine how much reasoning is appropriate based on the query. Google illustrates this with examples: a simple question like “How many provinces does Canada have?” requires minimal reasoning, while a complex engineering question about beam stress calculations would automatically engage deeper thinking processes.

“Integrating thinking capabilities into our mainline Gemini models, combined with improvements across the board, has led to higher quality answers,” Doshi said. “These improvements are true across academic benchmarks – including SimpleQA, which measures factuality.”

Google’s AI week: Free student access and video generation join the 2.5 Flash launch

The release of Gemini 2.5 Flash comes during a week of aggressive moves by Google in the AI space. On Monday, the company rolled out Veo 2 video generation capabilities to Gemini Advanced subscribers, allowing users to create eight-second video clips from text prompts. Today, alongside the 2.5 Flash announcement, Google revealed that all U.S. college students will receive free access to Gemini Advanced until spring 2026 — a move interpreted by analysts as an effort to build loyalty among future knowledge workers.

These announcements reflect Google’s multi-pronged strategy to compete in a market dominated by OpenAI’s ChatGPT, which reportedly sees over 800 million weekly users compared to Gemini’s estimated 250-275 million monthly users, according to third-party analyses.

The 2.5 Flash model, with its explicit focus on cost efficiency and performance customization, appears designed to appeal particularly to enterprise customers who need to carefully manage AI deployment costs while still accessing advanced capabilities.

“We’re super excited to start getting feedback from developers about what they’re building with Gemini Flash 2.5 and how they’re using thinking budgets,” Doshi said.

Beyond the preview: What businesses can expect as Gemini 2.5 Flash matures

While this release is in preview, the model is already available for developers to start building with, though Google has not specified a timeline for general availability. The company indicates it will continue refining the dynamic thinking capabilities based on developer feedback during this preview phase.

For enterprise AI adopters, this release represents an opportunity to experiment with more nuanced approaches to AI deployment, potentially allocating more computational resources to high-stakes tasks while conserving costs on routine applications.

The model is also available to consumers through the Gemini app, where it appears as “2.5 Flash (Experimental)” in the model dropdown menu, replacing the previous 2.0 Thinking (Experimental) option. This consumer-facing deployment suggests Google is using the app ecosystem to gather broader feedback on its reasoning architecture.

As AI becomes increasingly embedded in business workflows, Google’s approach with customizable reasoning reflects a maturing market where cost optimization and performance tuning are becoming as important as raw capabilities — signaling a new phase in the commercialization of generative AI technologies.
