
China’s DeepSeek AI has come under fire after a U.S. government-backed evaluation found its models lagging on safety, accuracy, and security benchmarks. The study warned that DeepSeek is more vulnerable to hacking, slower, and less reliable than some of its American rivals.
The Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST) published the findings, flagging security and reliability vulnerabilities. U.S. Commerce Secretary Howard Lutnick said reliance on foreign AI like DeepSeek is “dangerous and shortsighted.”
How the DeepSeek evaluation was run and what was tested
CAISI’s experts tested DeepSeek models V3.1, R1, and R1-0528 against four U.S. systems: OpenAI’s GPT-5, GPT-5-mini, and gpt-oss, as well as Anthropic’s Opus 4. The DeepSeek models were assessed on locally run weights rather than vendor APIs, meaning the results reflect the base systems themselves.
The evaluation spanned 19 benchmarks, including safety, engineering, science, and math, though the widest gaps appeared in software engineering and cybersecurity tasks. CAISI also ran end-to-end tasks to measure practical reliability, speed, and cost.
DeepSeek models fold under jailbreaks, handing over harmful answers
With public jailbreak prompts, DeepSeek produced detailed outputs for phishing, malware steps, and other restricted uses in 95 to 100% of tests. U.S. models complied with the same harmful requests in only 5 to 12% of cases.
Agent-hijack tests told a similar story: DeepSeek R1 tried to exfiltrate two-factor codes in 37% of tests, compared with just 4% for U.S. models. Researchers reported comparable gaps for phishing and simulated malware execution.
Wide performance gap in engineering and technical tasks
On Cybench, a benchmark of capture-the-flag cybersecurity tasks, DeepSeek V3.1 scored 40% versus 74% for OpenAI’s GPT-5. On SWE-bench Verified, which measures fixing real-world software issues, U.S. systems such as GPT-5 reached 63 to 67%, while DeepSeek managed 55%.
Evaluators also flagged uneven accuracy on complex, multi-step jobs, where incomplete or faulty code was more common. A 64,000-token context window and an average 1.7-second response time (vs. 1.2 seconds for U.S. leaders) further constrained longer workflows.
Cheaper on paper but not in real-world use
DeepSeek’s list prices didn’t deliver lower total spend. In end-to-end runs, GPT-5-mini matched or beat DeepSeek V3.1 while costing about 35% less on average once retries, tool calls, and task completion rates were counted.
Those same limits on context and latency drove extra passes and slower throughput, erasing much of DeepSeek’s headline price advantage in practice.
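The gap between list price and realized cost comes down to simple expected-value arithmetic: a model that fails more often needs more attempts per finished task. The sketch below is illustrative only; the per-call prices and success rates are made-up assumptions, not figures from the CAISI report:

```python
def effective_cost_per_task(price_per_call: float, success_rate: float,
                            tool_calls_per_attempt: float = 1.0) -> float:
    """Expected spend to complete one task.

    Treating attempts as independent, the expected number of attempts
    until success is 1 / success_rate, so a low list price can still
    produce a high bill per *completed* task.
    """
    expected_attempts = 1.0 / success_rate
    return price_per_call * tool_calls_per_attempt * expected_attempts

# Hypothetical numbers (NOT CAISI's data): the nominally cheaper model
# ends up more expensive per completed task once retries are counted.
cheap_but_flaky = effective_cost_per_task(price_per_call=0.010, success_rate=0.55)
pricier_reliable = effective_cost_per_task(price_per_call=0.012, success_rate=0.67)

print(f"cheap model:   ${cheap_but_flaky:.4f} per completed task")
print(f"pricier model: ${pricier_reliable:.4f} per completed task")
```

Under these assumed inputs, the model with the lower sticker price costs more per finished task, which is the same dynamic the evaluators describe.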
Censorship kicks in on politically sensitive prompts
CAISI found DeepSeek more likely than U.S. models to echo Chinese state narratives. In one dataset, V3.1 aligned with misleading CCP talking points in 5% of English responses and 12% of Chinese ones, compared with 2 to 3% for U.S. reference models.
The report cited evidence of AI model bias and censorship on politically sensitive queries. Because the weights run locally, these censorship patterns appear baked into the model rather than applied as external service filters.
Adoption climbs despite flaws
Despite the safety and reliability gaps flagged in testing, use of DeepSeek has grown rapidly. CAISI reported downloads of the models have increased by more than 1,000% since January, making it one of the fastest-rising systems tracked this year.
API activity is also climbing. DeepSeek V3.1 recorded 97.5 million queries on OpenRouter within four weeks of release, about 25% more than the U.S. open-weight baseline model logged in its first month.
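The “about 25% more” figure implies a rough first-month total for the U.S. open-weight baseline, which is easy to back out; this is back-of-envelope arithmetic from the reported numbers, not a figure stated in the report:

```python
deepseek_queries = 97.5e6   # V3.1 queries on OpenRouter in its first four weeks
lead_over_baseline = 0.25   # "about 25% more" than the U.S. open-weight baseline

# If DeepSeek's total is 25% above the baseline's first-month volume,
# the implied baseline total is deepseek / 1.25.
baseline_queries = deepseek_queries / (1 + lead_over_baseline)
print(f"implied baseline: {baseline_queries / 1e6:.1f} million queries")  # → 78.0 million
```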
Mandate behind the evaluation
CAISI’s evaluation falls under President Donald Trump’s America’s AI Action Plan, which requires federal testing of frontier AI from China. Beyond scoring performance, the program is meant to track foreign adoption, spotlight security risks, and gauge the balance of global competition.
In addition, the U.S. program acts as the government’s bridge to industry on AI safety and standards, making its findings a key reference point as American agencies work to secure technological leadership.
In a separate development, Huawei worked with Zhejiang University to produce DeepSeek-R1-Safe, which it says blocks nearly all common threats and achieves higher resilience to jailbreak attempts.