Advanced AI News
British lawmakers accuse Google DeepMind of ‘breach of trust’ over delayed Gemini 2.5 Pro safety report

By Advanced AI Editor · August 30, 2025 · 4 min read
A group of 60 U.K. lawmakers has signed an open letter accusing Google DeepMind of violating its commitments to AI safety with the release of Gemini 2.5 Pro. The letter, published by political activist group PauseAI, accuses the AI company of breaking the Frontier AI Safety Commitments it signed at an international summit in 2024 by not releasing the AI model with key safety information.

At an international summit cohosted by the U.K. and South Korea in February 2024, Google and other signatories promised to “publicly report” their models’ capabilities and risk assessments, as well as disclose whether outside organizations, such as government AI safety institutes, had been involved in testing.

However, when the company released Gemini 2.5 Pro in March 2025, it failed to publish a model card, the document that details key information about how a model is tested and built. This was despite the company's assertion that the new model outperformed competitors on industry benchmarks by “meaningful margins.” Instead, the AI lab released a simplified six-page model card three weeks after first making the model publicly available as a “preview” version. At the time, one AI governance expert called the report “meager” and “worrisome.”

The letter called Google’s delay a “failure to honour” the company’s commitment at the summit and “a troubling breach of trust with governments and the public.” The letter also took issue with what it called a “minimal ‘model card’” that lacked “any substantive detail about external evaluations,” as well as Google’s refusal to confirm whether government agencies like the U.K. AI Security Institute (AISI) participated in testing.

In a statement sent to Fortune on Friday, a spokesperson for Google DeepMind said the company stands by its “transparent testing and reporting processes” and was fulfilling its public commitments, including the Seoul Frontier AI Safety Commitments.

“As part of our development process, our models undergo rigorous safety checks, including by U.K. AISI and other third-party testers—and Gemini 2.5 is no exception,” the statement said.

When Google first released the preview version of Gemini 2.5 Pro, critics said that the missing system card appeared to violate several other pledges the AI company had made, including the 2023 White House Commitments and a voluntary Code of Conduct on Artificial Intelligence signed in October 2023.

The company had said in May that a more detailed “technical report” would come later, once it made a final version of the Gemini 2.5 Pro “model family” fully available to the public. The company appeared to provide that longer report in late June, months after the full version was released.

Google isn’t the only company to sign these pledges and then appear to pull back on safety disclosures. Meta’s model card for its frontier Llama 4 model was about as brief and limited in detail as the one Google released for Gemini 2.5 Pro, and it, too, drew criticism from AI safety researchers.

Earlier this year, OpenAI announced it would not publish a technical safety report for its new GPT-4.1 model. The company argued that GPT-4.1 is “not a frontier model,” since its reasoning-focused systems like o3 and o4-mini outperform it on many benchmarks.

The recent letter calls on Google to reaffirm its commitment to AI safety, asking the tech company to define deployment clearly as the point when a model becomes publicly accessible; commit to publishing safety evaluation reports on a set timeline for all future model releases; and provide full transparency for each release by naming the government agencies and independent third parties involved in testing, along with the exact testing timelines.

“If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards,” Lord Browne of Ladyton, a member of the House of Lords and one of the letter’s signatories, said in a statement.

This story was originally featured on Fortune.com