TEMPO.CO, Jakarta – DeepSeek-R1-0528, the latest AI model from DeepSeek, is suspected of having been trained on outputs from Google's Gemini. Sam Paech, an AI analyst, examined the model using bioinformatics-style tools to dissect its lineage and observed similarities with Gemini's responses.
“If you are wondering why DeepSeek R1 sounds a bit different, I think they probably switched from training on synthetic OpenAI to synthetic Gemini outputs,” Paech posted on X on Friday, May 30, 2025.
Paech also reviewed models published on Hugging Face, an open-source developer community platform, and shared his analysis code through his GitHub account.
DeepSeek released the updated DeepSeek-R1-0528 model on Hugging Face in May 2025. The company claims the model reasons more deeply, makes use of increased computational resources, and introduces algorithmic optimization mechanisms during post-training.
The latest DeepSeek model shows strong performance on a range of evaluation benchmarks, including mathematics, programming, and general logic. “Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro,” DeepSeek stated on Hugging Face.
Paech also presented a screenshot of EQ-Bench evaluation results for AI models, showing a series of Google models: Gemini 2.5 Pro, Gemini 2.5 Flash, and Gemma 3.
According to TechCrunch, the evidence that the model was trained on Gemini outputs is not conclusive, although some other developers also claim to have found traces of Gemini.
This is not the first time DeepSeek has been accused of training on a competitor's AI model data. In December 2024, several application developers observed that DeepSeek's V3 model often identified itself as ChatGPT.