Xiaomi Corp. today released MiMo-7B, a new family of reasoning models that it claims can outperform OpenAI’s o1-mini at some tasks.
The algorithm series is available under an open-source license. Its launch coincides with DeepSeek’s release of an update to Prover, a competing open-source reasoning model. The latter algorithm has a narrower focus than MiMo-7B: it’s designed to help mathematicians prove theorems.
The algorithms in Xiaomi’s MiMo-7B series have about seven billion parameters. There’s a base model, as well as enhanced versions of that model that offer increased output quality.
Xiaomi developed the enhanced versions using two machine learning techniques called supervised fine-tuning and reinforcement learning. Both methods improve AI models by training them on additional data, but the training signal differs: supervised fine-tuning pairs each prompt with a reference answer that the model learns to reproduce, while reinforcement learning provides no reference answers and instead scores the model's own outputs with a reward signal.
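The difference in training signal can be illustrated with a toy sketch. This is not Xiaomi's actual pipeline; the example data and the `reward` function are hypothetical stand-ins for how each technique supervises the model.

```python
# Supervised fine-tuning: each training example pairs a prompt with a
# reference answer, and the model is updated to reproduce that answer.
sft_example = {
    "prompt": "What is 12 * 9?",
    "reference_answer": "108",  # the labeled target drives the update
}

# Reinforcement learning: there is no reference answer. The model's own
# output is scored after the fact by a reward function, and the score
# (not a labeled target) drives the update.
def reward(prompt: str, model_output: str) -> float:
    """Hypothetical reward: 1.0 for a correct answer, 0.0 otherwise."""
    return 1.0 if model_output.strip() == "108" else 0.0

rl_signal = reward("What is 12 * 9?", "108")  # scores the model's attempt
```

In practice the reward for a reasoning model might come from a code test suite or a math answer checker rather than a hand-written function, but the shape of the signal is the same.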
Xiaomi has developed three enhanced versions of the MiMo-7B base model. It fine-tuned one version using supervised fine-tuning, another with reinforcement learning and a third with both methods. According to the company, that third model outperforms OpenAI’s o1-mini at generating code and solving math problems.
The base MiMo-7B model is less capable than the fine-tuned versions, but can still outdo significantly larger algorithms. “Our RL experiments from MiMo-7B-Base show that our model possesses extraordinary reasoning potential, even surpassing much larger 32B models,” Xiaomi researchers detailed on GitHub.
The MiMo-7B series is not the only new entry in the open-source AI ecosystem to debut today. DeepSeek quietly released an enhanced version of Prover, a reasoning model optimized for proving mathematical theorems that the company first released last year. Prover-V2, as the upgraded model is called, promises to deliver “state-of-the-art performance in neural theorem proving.”
DeepSeek trained Prover-V2 through a multi-step process. The company started by assembling a collection of theorems for which proofs are already available. In the next step, DeepSeek used two language models to create a step-by-step explanation of how mathematicians arrived at each proof. The company subsequently fed these AI-generated explanations into Prover-V2 to teach the model how to generate its own proofs.
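The steps above amount to a data-generation pipeline. The sketch below is a hypothetical illustration of that flow, not DeepSeek's code; the function names and the stand-in for the two language models are assumptions.

```python
# Hypothetical sketch of the pipeline the article describes: start from
# theorems with known proofs, generate step-by-step explanations with
# language models, and pair each explanation with its formal proof to
# form training data for the prover model.

def explain_proof(theorem: str, proof: str) -> str:
    """Stand-in for the two language models that produce a
    step-by-step explanation of how the proof was reached."""
    return f"Step-by-step reasoning from '{theorem}' to its proof."

def build_training_set(theorems_with_proofs):
    """Pair each theorem with an AI-generated explanation and its
    known formal proof, yielding one training record per theorem."""
    dataset = []
    for theorem, proof in theorems_with_proofs:
        dataset.append({
            "theorem": theorem,
            "explanation": explain_proof(theorem, proof),
            "proof": proof,
        })
    return dataset

# Toy corpus: one theorem whose formal proof is already known.
corpus = build_training_set([("a + b = b + a", "by add_comm")])
```

Records like these would then be used to train the model, which is how the informal explanations and the formal proofs end up combined in a single system.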
“This process enables us to integrate both informal and formal mathematical reasoning into a unified model,” DeepSeek researchers explained.
The release of MiMo-7B and Prover-V2 comes days after Alibaba Group Holding Ltd. introduced Qwen3, its new flagship family of reasoning-optimized models. The algorithms in the series range in size from 600 million to 235 billion parameters. Alibaba claims that Qwen3 can outperform OpenAI’s o1 and DeepSeek’s flagship R1 reasoning model across a range of tasks.