DeepSeek AI Models Are Unsafe and Unreliable, Finds NIST-Backed Study

By Advanced AI Editor | October 5, 2025

A person holding a phone with DeepSeek on the screen
Image: Solen Feyissa/Unsplash

China’s DeepSeek AI has come under fire after a U.S. government-backed evaluation found its models struggling on safety, accuracy, and security benchmarks. The study warned that DeepSeek is more vulnerable to hacking, slower, and less reliable than some of its American rivals.

The Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST) published the findings, flagging jailbreak susceptibility and other security weaknesses. U.S. Commerce Secretary Howard Lutnick said reliance on foreign AI like DeepSeek is “dangerous and shortsighted.”

How the DeepSeek evaluation was run and what was tested

CAISI’s experts tested DeepSeek models V3.1, R1, and R1-0528 against four U.S. systems: OpenAI’s GPT-5, GPT-5-mini, and gpt-oss, as well as Anthropic’s Opus 4. The models were assessed on locally run weights rather than vendor APIs, meaning the results reflect the base systems themselves.
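
For readers unfamiliar with the distinction, here is a minimal sketch of what evaluating locally run open weights looks like, using the standard Hugging Face transformers API. The checkpoint ID and prompt are illustrative assumptions, not CAISI’s actual harness:

```python
# Sketch: evaluating a model on locally downloaded weights rather than a
# vendor API. Model ID and prompt are illustrative, not CAISI's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1"  # hypothetical choice of checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Inference runs against the downloaded weights, so no server-side
# filter sits between the prompt and the model's behavior.
inputs = tokenizer("Summarize two-factor authentication.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```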

The evaluation spanned 19 benchmarks, including safety, engineering, science, and math, though the widest gaps appeared in software engineering and cybersecurity tasks. CAISI also ran end-to-end tasks to measure practical reliability, speed, and cost.

DeepSeek models fold under jailbreaks, handing over harmful answers

With public jailbreak prompts, DeepSeek produced detailed outputs for phishing, malware steps, and other restricted uses in 95 to 100% of tests. U.S. models complied with the same harmful requests in only 5 to 12% of cases.

Agent-hijack tests told a similar story: DeepSeek R1 tried to exfiltrate two-factor codes in 37% of tests, compared with just 4% for U.S. models. Researchers reported comparable gaps for phishing and simulated malware execution.
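
As a rough illustration (not CAISI’s actual methodology), a compliance rate like those above is simply the fraction of adversarial prompts for which the model produced the restricted output:

```python
# Hedged sketch of computing a jailbreak compliance rate: run each
# adversarial prompt, label whether the model complied, and report the
# complied fraction. The trial records below are invented placeholders,
# not data from the CAISI study.
from dataclasses import dataclass

@dataclass
class Trial:
    prompt_id: str
    complied: bool  # True if the model produced the restricted content

def compliance_rate(trials: list[Trial]) -> float:
    """Percentage of adversarial prompts the model complied with."""
    return 100.0 * sum(t.complied for t in trials) / len(trials)

trials = [
    Trial("phish-001", True),
    Trial("malware-002", True),
    Trial("hijack-003", False),
]
print(f"Compliance rate: {compliance_rate(trials):.1f}%")  # 66.7% on this toy data
```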

Wide performance gap in engineering and technical tasks

On Cybench, DeepSeek V3.1 scored 40% versus 74% for OpenAI’s GPT-5. On SWE-bench Verified, U.S. systems such as GPT-5 reached 63 to 67%, while DeepSeek managed 55%.

Evaluators also flagged uneven accuracy on complex, multi-step jobs, with incomplete or faulty code more common. A 64,000-token context window and an average 1.7-second response time (versus 1.2 seconds for U.S. leaders) further constrained longer workflows.

Cheaper on paper but not in real-world use

DeepSeek’s list prices didn’t translate into lower total spend. In end-to-end runs, GPT-5-mini matched or beat DeepSeek V3.1 while costing about 35% less on average once retries, tool calls, and task completion were counted.

Those same limits on context and latency drove extra passes and slower throughput, erasing much of DeepSeek’s headline price advantage in practice.
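
A back-of-the-envelope way to see why list price and total spend diverge: the effective cost of a task scales with the number of calls it takes and the odds of finishing at all. All figures below are made-up placeholders, not numbers from the report:

```python
# Sketch of "effective cost per completed task": headline per-call
# prices understate spend once retries, tool calls, and failure rates
# are counted. All numbers are invented placeholders.

def cost_per_success(price_per_call: float,
                     avg_calls_per_task: float,
                     success_rate: float) -> float:
    """Expected spend to obtain one successfully completed task."""
    return price_per_call * avg_calls_per_task / success_rate

low_list_price = cost_per_success(0.002, avg_calls_per_task=3.0, success_rate=0.55)
higher_list_price = cost_per_success(0.004, avg_calls_per_task=1.5, success_rate=0.65)

print(f"cheap per call, many retries:    ${low_list_price:.4f} per success")
print(f"pricier per call, fewer retries: ${higher_list_price:.4f} per success")
```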

Censorship kicks in on politically sensitive prompts

CAISI found DeepSeek more likely than U.S. models to echo Chinese state narratives. In one dataset, V3.1 aligned with misleading CCP talking points in 5% of English responses and 12% of Chinese ones, compared with 2 to 3% for the U.S. reference models.

The report cited evidence of AI model bias and censorship on politically sensitive queries. Because the weights run locally, these censorship patterns appear baked into the model rather than applied as external service filters.

Adoption climbs despite flaws

Despite the safety and reliability gaps flagged in testing, use of DeepSeek has grown rapidly. CAISI reported downloads of the models have increased by more than 1,000% since January, making it one of the fastest-rising systems tracked this year.

API activity is also climbing. DeepSeek V3.1 recorded 97.5 million queries on OpenRouter within four weeks of release, about 25% more than the U.S. open-weight baseline model logged in its first month.

Mandate behind the evaluation

CAISI’s evaluation falls under President Donald Trump’s America’s AI Action Plan, which requires federal testing of frontier AI from China. Beyond scoring performance, the program is meant to track foreign adoption, spotlight security risks, and gauge the balance of global competition.

In addition, the U.S. program acts as the government’s bridge to industry on AI safety and standards, making its findings a key reference point as American agencies work to secure technological leadership.

In a separate development, Huawei worked with Zhejiang University to produce DeepSeek-R1-Safe, which it says blocks nearly all common threats and achieves higher resilience to jailbreak attempts.


