A team of researchers in China has questioned hospitals’ rapid adoption of DeepSeek, warning that the rush to use the artificial intelligence (AI) start-up’s cost-efficient open-source models creates clinical safety and privacy risks.
As of early March, at least 300 hospitals in China had started using DeepSeek’s large language models (LLMs) in clinical diagnostics and medical decision support.
The researchers warned that, despite the models’ strong reasoning capabilities, DeepSeek’s tendency to generate “plausible but factually incorrect outputs” could lead to “substantial clinical risk”, according to a paper published last month in the medical journal JAMA. The team includes Wong Tien Yin, founding head of Tsinghua Medicine, a group of medical research schools at Tsinghua University in Beijing.
The paper was a rare voice of caution in China against the overzealous use of DeepSeek. The start-up has become the nation’s poster child for AI after its low-cost, high-performance V3 and R1 models captured global attention this year. DeepSeek did not immediately respond to a request for comment.
According to Wong, an ophthalmology professor and former medical director at the Singapore National Eye Centre, and his co-authors, healthcare professionals could become overreliant on or uncritical of DeepSeek’s output. This could result in diagnostic errors or treatment biases, while more cautious clinicians could face the burden of verifying AI output in time-sensitive clinical settings, they said.
While hospitals often choose private, on-site deployment of DeepSeek models instead of cloud-based solutions to mitigate security and privacy risks, this approach presents challenges. It “shifts security responsibilities to individual healthcare facilities”, many of which lack comprehensive cybersecurity infrastructure, according to the researchers.
In China, the combination of disparities in primary care infrastructure and high smartphone penetration also created a “perfect storm” for clinical safety concerns, they added.
“Underserved populations with complex medical needs now have unprecedented access to AI-driven health recommendations, but often lack the clinical oversight needed for safe implementation,” the researchers wrote.
The paper reflects the healthcare community’s increasing scrutiny of LLM use in clinical and medical settings, as organisations across China accelerate adoption. Researchers from the Chinese University of Hong Kong also published a paper last month on the cybersecurity of AI agents, finding that most agents powered by mainstream LLMs were susceptible to attacks, with DeepSeek-R1 being the most vulnerable.
The country has sped up the use of LLMs in the healthcare sector amid a boom in generative AI technologies. Last month, Chinese fintech giant Ant Group launched nearly 100 AI medical agents on its Alipay payments app. The agents are modelled on medical experts from China’s top hospitals.
Tairex, a start-up incubated at Tsinghua University, also began internal tests of a virtual hospital platform in November. The platform features 42 AI doctors covering 21 departments, including emergency, respiratory, paediatrics and cardiology. The company aimed to make the platform available to the general public this year, it said at the time. – South China Morning Post