Chinese AI startup DeepSeek’s newest AI model, an updated version of the company’s R1 reasoning model, achieves impressive scores on benchmarks for coding, math, and general knowledge, nearly matching OpenAI’s flagship o3. But the upgraded R1, also known as “R1-0528,” may be less willing to answer contentious questions, particularly questions about topics the Chinese government considers controversial.
That’s according to testing conducted by the pseudonymous developer behind SpeechMap, a platform to compare how different models treat sensitive and controversial subjects. The developer, who goes by the username “xlr8harder” on X, claims that R1-0528 is “substantially” less permissive of contentious free speech topics than previous DeepSeek releases and is “the most censored DeepSeek model yet for criticism of the Chinese government.”
As Wired explained in a piece from January, models in China are required to follow stringent information controls. A 2023 law forbids models from generating content that “damages the unity of the country and social harmony,” which could be construed as content that counters the government’s historical and political narratives. To comply, Chinese startups often censor their models by either using prompt-level filters or fine-tuning them. One study found that DeepSeek’s original R1 refuses to answer 85% of questions about subjects deemed by the Chinese government to be politically controversial.
According to xlr8harder, R1-0528 censors answers to questions about topics like the internment camps in China’s Xinjiang region, where more than a million Uyghur Muslims have been arbitrarily detained. While it sometimes criticizes aspects of Chinese government policy — in xlr8harder’s testing, it offered the Xinjiang camps as an example of human rights abuses — the model often gives the Chinese government’s official stance when asked questions directly.
TechCrunch observed the same behavior in our own brief testing.

China’s openly available AI models, including video-generating models such as Magi-1 and Kling, have drawn criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, the CEO of AI dev platform Hugging Face, warned about the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI.