Advanced AI News
TechCrunch AI

Google’s latest AI model report lacks key safety details, experts say

By Advanced AI Bot | April 18, 2025 | 4 min read


On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, experts say the report is light on details, making it difficult to determine which risks the model might pose.

Technical reports provide useful — and at times unflattering — information that companies don't always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.

Google takes a different safety reporting approach than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the “experimental” stage. The company also doesn’t include findings from all of its “dangerous capability” evaluations in these write-ups; it reserves those for a separate audit.

Several experts TechCrunch spoke with were still disappointed by the sparsity of the Gemini 2.5 Pro report, however, which they noted doesn’t mention Google’s Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause “severe harm.”

“This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. “It’s impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models.”

Thomas Woodside, co-founder of the Secure AI Project, said that while he’s glad Google released a report for Gemini 2.5 Pro, he’s not convinced of the company’s commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024 — for a model announced in February that same year.

Adding to the concern, Google hasn't made a report available for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch a report for Flash is “coming soon.”

“I hope this is a promise from Google to start publishing more frequent updates,” Woodside told TechCrunch. “Those updates should include the results of evaluations for models that haven’t been publicly deployed yet, since those models could also pose serious risks.”

Google may have been one of the first AI labs to propose standardized reports for models, but it’s not the only one that’s been accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.

Hanging over Google’s head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all “significant” public AI models “within scope.” The company followed up that promise with similar commitments to other countries, pledging to “provide public transparency” around AI products.

Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a “race to the bottom” on AI safety.

“Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he told TechCrunch.

Google has said in statements that, while not detailed in its technical reports, it conducts safety testing and “adversarial red teaming” for models ahead of release.


