Advanced AI News
TechCrunch AI

One of Google’s recent Gemini AI models scores worse on safety

By Advanced AI Editor | May 2, 2025 | 3 min read


A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.

Text-to-text safety measures how frequently a model violates Google’s guidelines given a prompt, while image-to-text safety evaluates how closely the model adheres to these boundaries when prompted using an image. Both tests are automated, not human-supervised.
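
For readers unfamiliar with how automated safety evaluations of this kind are typically run, the sketch below shows one common pattern: generate a response for each test prompt and have an automated classifier, rather than a human reviewer, flag policy violations. It is purely illustrative; Google has not published its harness, guideline set, or scoring code, and the generate and violates_policy functions here are hypothetical stand-ins.

```python
# Minimal, illustrative sketch of an automated safety evaluation loop.
# Google's actual harness and guidelines are not public; generate() and
# violates_policy() below are hypothetical stand-ins.

def text_to_text_safety(prompts, generate, violates_policy):
    """Return the fraction of prompts whose responses violate the policy."""
    violations = 0
    for prompt in prompts:
        response = generate(prompt)            # model under test, e.g. a Gemini Flash model
        if violates_policy(prompt, response):  # automated grader, not a human reviewer
            violations += 1
    return violations / len(prompts)

# Comparing two model versions then reduces to comparing their violation rates:
# rate_new = text_to_text_safety(prompts, new_model_generate, violates_policy)
# rate_old = text_to_text_safety(prompts, old_model_generate, violates_policy)
# A higher rate_new than rate_old is the kind of regression the report describes.
```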

In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive — in other words, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned the models not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to not take an editorial stance and offer multiple perspectives on controversial topics.

Sometimes, those permissiveness efforts have backfired. TechCrunch reported Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”

According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, inclusive of instructions that cross problematic lines. The company claims that the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.


“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” reads the report.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via AI platform OpenRouter found that it’ll uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.
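
For context on what such third-party probing can look like in practice, here is a minimal sketch of querying a model through OpenRouter's OpenAI-compatible chat completions endpoint and applying a crude refusal check. The model slug and the keyword heuristic are assumptions for illustration only, not TechCrunch's or SpeechMap's actual methodology.

```python
# Illustrative sketch of probing refusal behavior via OpenRouter's
# OpenAI-compatible chat completions API. The model slug and the simple
# refusal heuristic are assumptions, not the benchmark's real grading.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "google/gemini-2.5-flash-preview"  # assumed slug; check OpenRouter's model list

def ask(prompt: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def looks_like_refusal(answer: str) -> bool:
    # Crude keyword check; benchmarks like SpeechMap use far more careful grading.
    markers = ("i can't", "i cannot", "i won't", "i'm not able to")
    return any(m in answer.lower() for m in markers)
```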

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gave in its technical report demonstrate the need for more transparency in model testing.

“There’s a trade-off between instruction-following and policy following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases where policies were violated, although they say they are not severe. Without knowing more, it’s hard for independent analysts to know whether there’s a problem.”

Google has come under fire for its model safety reporting practices before.

It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was eventually published, it initially omitted key safety testing details.

On Monday, Google released a more detailed report with additional safety information.


