DeepSeek has become the first major artificial intelligence (AI) company to publish peer-reviewed research detailing the safety risks of its models, revealing that open-source AI systems are particularly vulnerable to malicious attacks.
The Chinese AI startup published its findings in the prestigious academic journal Nature, marking a significant shift toward transparency in an industry where Chinese firms have historically been less forthcoming about AI safety concerns than their American counterparts.
DeepSeek’s research examined its latest models – the R1 reasoning model released in January 2025 and the V3 base model from December 2024 – using both industry-standard benchmarks and proprietary testing methods. The study found that while these models performed slightly better than competitors like OpenAI’s o1 and GPT-4o in standard safety tests, they showed vulnerabilities when subjected to “jailbreaking” attacks.
The research revealed that DeepSeek’s R1 model became “relatively unsafe” once its external safety mechanisms were disabled, based on tests using 1,120 internal safety questions. More troubling, the study found that all tested models, including Alibaba Group’s Qwen2.5, showed “significantly increased rates” of harmful responses when faced with jailbreak techniques – methods that trick AI systems into producing dangerous content through indirect prompts.
The study highlighted a particular vulnerability in open-source models like R1 and Qwen2.5, which are freely available for download and modification. While this accessibility promotes technological advancement and adoption, it also enables users to potentially remove built-in safety controls.
“We fully recognize that, while open-source sharing facilitates the dissemination of advanced technologies within the community, it also introduces potential risks of misuse,” concluded the paper, which was overseen by DeepSeek CEO Liang Wenfeng.
The company advised developers using open-source models to implement comparable safety measures in their applications.
Beyond safety concerns, the Nature paper disclosed that DeepSeek’s R1 model cost just $294,000 to train – a figure that had been the subject of widespread speculation since the model’s high-profile January launch. The cost represents a fraction of what American companies reportedly spend on similar models, raising questions about the economics of AI development.
The publication has been widely celebrated in China, where DeepSeek has been hailed across social media platforms as the “first LLM company to be peer-reviewed”. Industry experts suggest this transparency could encourage other Chinese AI companies to be more open about their safety practices.
The research also addressed accusations that DeepSeek had “distilled”, or copied, OpenAI’s models, a controversial practice in which one system is trained on a competitor’s outputs. In the paper, DeepSeek denied those claims.