NVIDIA has released OpenReasoning-Nemotron, a new family of powerful and efficient open-source AI models. Announced on July 19, 2025, and available globally via Hugging Face, the models set new performance records for reasoning in math, science, and code.
NVIDIA developed four models by distilling capabilities from DeepSeek’s massive 671B R1-0528 model, one of China’s top AI systems. This strategy provides developers with free, commercially permissive access to state-of-the-art reasoning AI.
The release aims to democratize advanced capabilities, making them available in 1.5B, 7B, 14B, and 32B parameter sizes. This avoids the need for frontier-scale computing resources, opening doors for smaller teams and researchers.
Distilling Power From a Frontier Model
At the core of OpenReasoning-Nemotron is a sophisticated distillation strategy. NVIDIA leveraged the recently upgraded DeepSeek-R1-0528 model, a 671-billion parameter powerhouse, to teach smaller models its advanced reasoning skills.
This was achieved by training on a curated dataset of 5 million high-quality reasoning examples generated by the DeepSeek model. The process transfers generalized reasoning ability into far more compact models built on Alibaba’s Qwen 2.5 architecture.
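To make that recipe concrete, here is a minimal sketch of SFT-style distillation: a small Qwen 2.5 student is fine-tuned on solution traces written by a larger teacher. The teacher_traces.jsonl file, its record fields, and the Qwen/Qwen2.5-1.5B base checkpoint are illustrative assumptions, not details from NVIDIA’s actual pipeline.

```python
import json
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "Qwen/Qwen2.5-1.5B"  # illustrative student checkpoint, not NVIDIA's exact base

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model.train()

def load_traces(path):
    """Yield teacher-generated (prompt, solution) pairs as single training strings."""
    with open(path) as f:
        for line in f:
            record = json.loads(line)  # hypothetical schema: {"prompt": ..., "solution": ...}
            yield record["prompt"] + "\n" + record["solution"] + tokenizer.eos_token

def collate(batch):
    # Tokenize and pad a batch; labels mirror the inputs so the student learns
    # to reproduce the teacher's full reasoning trace token by token.
    enc = tokenizer(batch, return_tensors="pt", padding=True, truncation=True, max_length=4096)
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    return enc

loader = DataLoader(list(load_traces("teacher_traces.jsonl")),  # hypothetical file of teacher outputs
                    batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for batch in loader:
    loss = model(**batch).loss  # standard causal-LM cross-entropy on the teacher traces
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

A production run would add sequence packing, learning-rate scheduling, and multi-GPU sharding, but the core training signal is simply next-token cross-entropy on the teacher’s reasoning traces.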
The results are impressive. According to NVIDIA’s benchmarks, the Nemotron models establish new state-of-the-art pass@1 scores (the accuracy of a single sampled solution) for their size classes. The 32B model, for instance, scores 89.2 on AIME24 and 70.2 on LiveCodeBench without special tuning.
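For reference, pass@1 is usually estimated from many samples per problem. The standard unbiased pass@k estimator from the Codex/HumanEval paper is only a few lines of Python; the sample counts below are invented purely for illustration.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c correct: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 64 samples on one problem, 57 of them correct -> pass@1 ≈ 0.89
print(round(pass_at_k(n=64, c=57, k=1), 3))
```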
For even more demanding tasks, NVIDIA introduced a “heavy” mode using a technique called Generative Selection (GenSelect). This method generates multiple potential solutions and uses the model to select the best one, significantly boosting accuracy on complex problems.
With GenSelect, the 32B model’s score on the HMMT Feb 2025 math benchmark jumps from 73.8 to an incredible 96.7, demonstrating powerful emergent reasoning capabilities at scale.
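A simplified generate-then-select loop in the spirit of that “heavy” mode looks roughly like the sketch below: sample several candidate solutions, then ask the model itself to judge which one is best. This is an illustration of the general technique rather than NVIDIA’s exact GenSelect pipeline, and the nvidia/OpenReasoning-Nemotron-7B repo id is assumed; check the model card for the published names.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/OpenReasoning-Nemotron-7B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")

def sample_candidates(problem: str, n: int = 8) -> list[str]:
    """Draw n independent solutions with temperature sampling."""
    inputs = tokenizer(problem, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.95,
                             max_new_tokens=2048, num_return_sequences=n)
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(seq[prompt_len:], skip_special_tokens=True) for seq in outputs]

def select_best(problem: str, candidates: list[str]) -> str:
    """Ask the model to pick the strongest candidate by index (greedy decoding)."""
    listing = "\n\n".join(f"Candidate {i}:\n{c}" for i, c in enumerate(candidates))
    judge_prompt = (f"Problem:\n{problem}\n\n{listing}\n\n"
                    "Which candidate is correct and best reasoned? Answer with its number only.")
    inputs = tokenizer(judge_prompt, return_tensors="pt").to(model.device)
    choice = model.generate(**inputs, do_sample=False, max_new_tokens=8)
    verdict = tokenizer.decode(choice[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    digits = "".join(ch for ch in verdict if ch.isdigit())
    return candidates[int(digits) % len(candidates)] if digits else candidates[0]

problem = "What is the remainder when 7^100 is divided by 5?"
answers = sample_candidates(problem)
print(select_best(problem, answers))
```

The trade-off is straightforward: accuracy rises with the number of candidates, but so does inference cost, since every extra candidate is a full reasoning generation.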
The DeepSeek Connection and Geopolitical Context
The choice of DeepSeek’s model as the source is a major validation of the Chinese firm’s technology. DeepSeek made waves in May 2025 with its R1-0528 update, claiming its performance was “approaching that of leading models, such as O3 and Gemini 2.5 Pro” from competitors OpenAI and Google.
This progress occurs amid intense geopolitical headwinds. In April 2025, a US House Committee labeled DeepSeek a national security risk. Committee Chairman John Moolenaar issued a stark warning about the company.
He stated, “This report makes it clear: DeepSeek isn’t just another AI app — it’s a weapon in the Chinese Communist Party’s arsenal, designed to spy on Americans, steal our technology, and subvert U.S. law.” NVIDIA’s use of the model highlights the interconnected nature of global AI development.
An Open License and a Baseline for Research
NVIDIA has released all four OpenReasoning-Nemotron models under a commercially permissive license. They are designed for easy integration with tools like the NVIDIA NeMo framework, TensorRT-LLM, and Hugging Face Transformers, facilitating rapid deployment.
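As a quick start, the checkpoints can be loaded like any other causal language model through the Transformers pipeline API. The repo id below follows NVIDIA’s naming convention but is an assumption; verify it, along with the recommended chat template and generation settings, against the model card.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="nvidia/OpenReasoning-Nemotron-1.5B",  # assumed repo id; the 7B, 14B, and 32B variants follow the same pattern
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "What is the remainder when 2^10 is divided by 7?"}]
result = generator(messages, max_new_tokens=1024)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply, including its reasoning
```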
By using only Supervised Fine-Tuning (SFT) and avoiding Reinforcement Learning (RL), NVIDIA provides a strong, stable baseline. This allows the research community to build upon these models to explore new RL techniques for reasoning, potentially accelerating the entire field.