Chinese AI lab DeepSeek is once again under scrutiny following the release of its latest model, R1-0528, an updated version of its R1 reasoning model.
Launched last week, the new model has demonstrated impressive performance in mathematical reasoning and code generation tasks, but it’s the source of its training data that’s raising eyebrows.
Allegations of Training on Gemini Outputs
While DeepSeek has remained tight-lipped about the dataset it used to train R1-0528, the developer community is buzzing with theories that Google’s Gemini may have played a role.
Melbourne-based developer Sam Paech, known for building “emotional intelligence” assessments for AI systems, shared what he describes as signs that the DeepSeek model was trained on Gemini outputs.
According to his post on X, R1-0528 frequently favors “words and expressions similar to those that Google’s Gemini 2.5 Pro favors.”
If you’re wondering why new deepseek r1 sounds a bit different, I think they probably switched from training on synthetic openai to synthetic gemini outputs. pic.twitter.com/Oex9roapNv
— Sam Paech (@sam_paech) May 29, 2025
While this alone isn’t definitive proof, it has been enough to ignite a fresh wave of scrutiny. Another developer, the anonymous creator of SpeechMap, a “free speech eval” tool for AI, noted that the “thoughts” the model generates as it works toward a conclusion “read like Gemini traces.”
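Neither Paech nor the SpeechMap creator has published a formal methodology, but the kind of signal they describe can be illustrated with a simple stylometric comparison: check how heavily one model's outputs lean on the words another model over-uses relative to a third. The sketch below is purely hypothetical, using made-up sample strings and a crude frequency ratio; it is not the analysis either developer ran.

```python
from collections import Counter
import math
import re


def word_freqs(texts):
    """Relative word frequencies across a list of output strings."""
    counts = Counter()
    for t in texts:
        counts.update(re.findall(r"[a-z']+", t.lower()))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}


def distinctive_overlap(candidate, reference, baseline, top_n=20):
    """Score how much `candidate` leans on words that `reference`
    favors relative to `baseline` -- a crude stylometric signal."""
    ref, base, cand = word_freqs(reference), word_freqs(baseline), word_freqs(candidate)
    # Words the reference model over-uses compared with the baseline.
    favored = sorted(
        (w for w in ref if ref[w] > base.get(w, 1e-6)),
        key=lambda w: math.log(ref[w] / base.get(w, 1e-6)),
        reverse=True,
    )[:top_n]
    # Fraction of the candidate's tokens drawn from that favored vocabulary.
    return sum(cand.get(w, 0.0) for w in favored)


# Toy usage with invented outputs; a real analysis would use thousands of samples.
gemini_like = ["Let us delve into the nuanced interplay of these factors."]
openai_like = ["Sure! Here's a quick breakdown of the key points."]
new_model = ["We should delve into the nuanced tradeoffs at play here."]
print(distinctive_overlap(new_model, gemini_like, openai_like))
```

On a handful of sentences a score like this is meaningless noise, which is why observers stress that stylistic resemblance alone is suggestive rather than conclusive.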
This isn’t the first time critics have accused DeepSeek of questionable data sourcing. Back in December, users observed that DeepSeek’s V3 model sometimes identified itself as ChatGPT, suggesting that it may have absorbed data from OpenAI’s chatbot.
Earlier this year, OpenAI told the Financial Times that it had found evidence pointing to DeepSeek’s use of distillation, a technique in which a smaller model is trained on the outputs of a larger, more capable one. According to Bloomberg, Microsoft, a major OpenAI partner, detected large volumes of data being extracted through OpenAI developer accounts that it suspected were affiliated with DeepSeek. These incidents reportedly took place in late 2024.
While distillation is a common technique in AI development, OpenAI’s terms of service explicitly prohibit using its model outputs to build rival systems.
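For readers unfamiliar with the technique, the sketch below shows what classic distillation looks like in code. It is a minimal, generic example using PyTorch with stand-in teacher and student models, not DeepSeek's (or anyone's) actual pipeline; the names and sizes are invented for illustration.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: push the student's output distribution
    toward the teacher's temperature-softened distribution."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between student and teacher, scaled by T^2 as is conventional.
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature**2


# Toy usage: random tensors stand in for real teacher outputs and a real student model.
vocab_size, batch, hidden = 32, 4, 16
teacher_logits = torch.randn(batch, vocab_size)       # stand-in for the teacher's outputs
student = torch.nn.Linear(hidden, vocab_size)         # stand-in for a smaller student model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

features = torch.randn(batch, hidden)
loss = distillation_loss(student(features), teacher_logits)
loss.backward()
optimizer.step()
print(f"distillation loss: {loss.item():.4f}")
```

In practice, distilling from a commercial API usually doesn't involve logits at all, since providers rarely expose them; instead, developers collect the larger model's text completions and fine-tune the smaller model on them directly, which is precisely the kind of use the terms of service restrict.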
AI Ecosystem Challenges
One reason for the confusion around these accusations lies in a growing issue within the AI community: dataset contamination. The widespread use of AI-generated content on the open internet, ranging from Reddit posts to clickbait websites, makes it increasingly difficult to ensure clean and human-authored training data.
Models trained on this contaminated web data often converge on similar expressions, leading to overlaps in “thoughts” and phrasing. As a result, multiple models can sound alike even without direct copying.
Still, Nathan Lambert, a researcher at the nonprofit AI institute AI2, believes it’s plausible that DeepSeek leaned on Gemini outputs.
In a recent post, he remarked: “If I was DeepSeek, I would definitely create a ton of synthetic data from the best API model out there.” He added that DeepSeek may be “short on GPUs and flush with cash,” making API-sourced data an efficient route.
If I was DeepSeek I would definitely create a ton of synthetic data from the best API model out there. Theyre short on GPUs and flush with cash. It’s literally effectively more compute for them. yes on the Gemini distill question.
— Nathan Lambert (@natolambert) June 3, 2025
Industry Response
In response to rising concerns over unauthorized model training, leading AI firms are introducing stricter access controls. In April, OpenAI began requiring ID verification from organizations accessing its most advanced models, and the ID must come from a country supported by its API; China is not on that list.
Meanwhile, Google has started summarizing model traces on its AI Studio platform, a move designed to make it harder for rivals to reverse-engineer Gemini’s performance. Similarly, Anthropic announced in May that it would also summarize model traces to protect its “competitive advantages.”
As the controversy continues to unfold, Google has yet to make an official statement on whether DeepSeek may have accessed or mirrored Gemini’s data.