French AI startup Mistral has unveiled Magistral, its first reasoning-focused language model, aimed at transparent, multilingual, and domain-specific problem solving.
Released in two variants, Magistral Small (24B parameters, open-weight) and Magistral Medium (enterprise-grade), Magistral is designed for tasks that require step-by-step deliberation. Mistral says it addresses the limitations of earlier models by offering more consistent reasoning across multiple languages and traceable logic across disciplines.
The launch follows closely on the release of Mistral's enterprise-grade Document AI platform, which the company claims sets a new benchmark in speed and accuracy for OCR-based document processing.
Magistral Medium scored 73.6% on the AIME 2024 benchmark and 90% with majority voting over 64 samples. The open Magistral Small model scored 70.7% and 83.3%, respectively. The company says both models are tuned for legal research, financial modelling, software engineering, and regulated sectors such as healthcare and government.
“Magistral is fine-tuned for multi-step logic, improving interpretability and providing a traceable thought process in the user’s language, unlike general-purpose models,” the company wrote in its announcement blog post. The model supports reasoning in English, French, Arabic, German, Chinese, and several other languages.
Mistral is also integrating the model into its Le Chat assistant, where a new ‘Flash Answers’ mode is claimed to deliver responses at up to 10x the speed of competing systems. Magistral Medium is already accessible on La Plateforme and Amazon SageMaker, with support coming soon to IBM WatsonX, Azure AI, and Google Cloud.
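For developers, access on La Plateforme goes through Mistral's standard chat-completions API. The snippet below is a minimal sketch of that pattern, assuming the model identifier `magistral-medium-latest` and an API key in the `MISTRAL_API_KEY` environment variable; the exact model name should be checked against Mistral's documentation.

```python
import os
import requests

# Minimal sketch: query Magistral Medium on La Plateforme via the
# chat-completions endpoint. The model identifier below is an assumption;
# consult Mistral's model list for the current name.
API_URL = "https://api.mistral.ai/v1/chat/completions"
MODEL_ID = "magistral-medium-latest"  # assumed identifier

def ask(question: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": MODEL_ID,
            "messages": [{"role": "user", "content": question}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Per Mistral's claim, the step-by-step trace comes back in the user's language.
    print(ask("Walk through the steps: what is 17% of 2,350?"))
```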
The company has released Magistral Small under Apache 2.0 on Hugging Face and shared a research paper detailing the model’s training, infrastructure, and evaluation methodology. Mistral plans to iterate rapidly on the architecture, encouraging the developer community to build on its transparent reasoning framework.
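Because the Small weights are public, they can be pulled directly from Hugging Face. The sketch below uses the transformers library and assumes the repository ID `mistralai/Magistral-Small-2506` and enough GPU memory for a 24B-parameter model; the model card also documents serving options such as vLLM.

```python
from transformers import pipeline

# Minimal sketch: load the open-weight Magistral Small from Hugging Face.
# The repo ID is an assumption; verify it on the model card before use.
MODEL_ID = "mistralai/Magistral-Small-2506"

generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    torch_dtype="auto",   # pick bf16/fp16 automatically when available
    device_map="auto",    # spread the 24B weights across available devices
)

messages = [
    {"role": "user", "content": "Show your reasoning: is 2027 a prime number?"}
]
out = generator(messages, max_new_tokens=512)
# For chat-style input, generated_text is the full message list; the last
# entry holds the model's reply.
print(out[0]["generated_text"][-1]["content"])
```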