
ZDNET’s key takeaways

DeepSeek drops how much its R1 model cost to build.
R1’s capabilities make investors question exorbitant AI spending.
Nvidia declined to say if it ever plans to use Intel’s factories.
DeepSeek, the Chinese AI lab that shook up the market with its impressive open-source R1 model in January, has finally revealed the secret so many were wondering about: how it trained R1 more cheaply than the companies behind other, primarily American, frontier models.
Also: Worried about AI’s soaring energy needs? Avoiding chatbots won’t help – but 3 things could
The company wrote in a paper published Wednesday that training R1 cost just $249,000, a strikingly low figure in the high-spending world of AI. For context, DeepSeek said in an earlier research paper that its V3 model, a general-purpose chatbot model comparable to families like Claude, cost $5.6 million to train.
That number has been disputed, with some experts questioning whether it includes all development costs (including infrastructure, R&D, data, and more) or singles out its final training run. Regardless, it’s still a fraction of what companies like OpenAI have spent building models (Sam Altman himself has estimated that GPT-4 cost north of $100 million).
That difference is also reflected in what DeepSeek charges users for R1: $0.14 for a million tokens (about 750,000 words analyzed) — compared to the $7.50 OpenAI charges for the equivalent tier.
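As a rough sketch of what that per-token gap means at scale, the snippet below computes the bill for a given volume of tokens at each quoted rate. The prices are the per-million-token figures cited above; actual API pricing varies by tier, by input versus output tokens, and over time.

```python
# Rough cost comparison using the per-million-token prices quoted above.
# These rates are taken from the article and are illustrative only.
DEEPSEEK_R1_PER_M = 0.14   # USD per 1M tokens (as quoted)
OPENAI_PER_M = 7.50        # USD per 1M tokens, equivalent tier (as quoted)

def cost_usd(tokens: int, price_per_million: float) -> float:
    """Dollar cost of processing `tokens` tokens at a per-million rate."""
    return tokens / 1_000_000 * price_per_million

tokens = 10_000_000  # e.g., ten million tokens of text
print(f"DeepSeek R1: ${cost_usd(tokens, DEEPSEEK_R1_PER_M):.2f}")
print(f"OpenAI:      ${cost_usd(tokens, OPENAI_PER_M):.2f}")
```

At ten million tokens, the quoted rates work out to $1.40 versus $75.00, roughly a 50x difference.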
AI models take tons of resources to build. Between data, GPUs, energy and water usage for data centers, personnel costs, and more, it can be an expensive undertaking, especially for more advanced or capable models trained on bigger data sets. For Chinese labs, there's the added roadblock of accessing US-made chips due to export bans intended to curb competition. DeepSeek's reported ability to create successful models by strategically optimizing older chips also gave it a competitive edge: the company noted in the paper that it built R1 using 512 Nvidia H800 chips, a less powerful variant made specifically for the Chinese market.
Also: Google claims Gemma 3 reaches 98% of DeepSeek’s accuracy – using only one GPU
The paper is the most significant information drop from DeepSeek since January. Earlier this month, reports teased a new DeepSeek release coming soon.
DeepSeek’s potential threat
In January, DeepSeek's release rocked the AI industry because of its perceived potential to pop the AI investment bubble. The efficiency of R1 put AI costs in context for investors backing companies like OpenAI, which is currently trying to raise another $40 billion despite still not being profitable.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Also: What Nvidia’s stunning $5 billion Intel bet means for enterprise AI and next-gen laptops
Considering AI spending is projected to hit $1.5 trillion by the end of this year, however, that bubble doesn’t seem to be bursting soon.