Mistral AI says it was able to train its large language models 2.5 times faster with the Nvidia GB200 NVL72 system on CoreWeave than previous hardware generations.
In a blog post published by CoreWeave, the AI cloud provider detailed how Mistral AI is using its hardware for training workloads.
Mistral AI has been a CoreWeave customer since 2023, initially signing on for access to Nvidia H100s. Its deployment has since been extended to include H200s and GB200 clusters in the NVL72 rack configuration.
The GB200 NVL72-based instances on CoreWeave connect 36 Nvidia Grace CPUs and 72 Nvidia Blackwell GPUs in a liquid-cooled, rack-scale design. They are available as bare-metal instances through CoreWeave Kubernetes Service and are scalable up to 110,000 GPUs.
According to CoreWeave, Mistral was one of the first customers to access the Nvidia GB200 NVL72 racks on its platform, and has been able to train its AI models 2.5x faster than with H200s, while "vastly exceeding" the performance and efficiency of the H100s.
Mistral AI’s CTO, Timothée Lacroix, said: “CoreWeave is one of the few providers that has real experience at very large scale for exactly what we do, so large language model training.”
He added: “[Our models] were trained 100 percent on CoreWeave infrastructure. I think not being on that kind of infrastructure would’ve delayed us by at least a few months.”
Mistral utilizes CoreWeave's Slurm on Kubernetes solution, along with the cloud provider's observability platform, to visualize entire fleets of GPUs in one place.
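Training jobs under Slurm are typically driven by batch scripts submitted to the scheduler. The following is a minimal, hypothetical sketch of a multi-node GPU training submission; the partition name, node counts, and training script are illustrative assumptions, not Mistral's actual configuration:

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a multi-node GPU training job.
# Partition name, node/GPU counts, and script path are illustrative only.
#SBATCH --job-name=llm-pretrain
#SBATCH --partition=gpu          # assumed partition name
#SBATCH --nodes=16               # 16 nodes x 4 GPUs = 64 GPUs total
#SBATCH --gpus-per-node=4
#SBATCH --ntasks-per-node=4      # one task (training process) per GPU
#SBATCH --time=72:00:00

# srun launches one training process per task across all allocated nodes.
srun python train.py --config pretrain.yaml
```

In a Slurm-on-Kubernetes setup, the scheduler runs on top of the Kubernetes cluster, so jobs like this can share the same bare-metal GPU pool as containerized workloads.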
CoreWeave began offering the Nvidia GB200 GPUs "at scale" in April 2025, having first demonstrated a GB200 NVL72 system at one of its data centers in November 2024. That demonstration cluster delivered up to 1.4 exaFLOPS of AI compute.
In July 2025, Mistral AI was reportedly in talks to secure equity and debt financing, potentially including as much as $1bn in equity from MGX.
Founded in 2023, Mistral AI is a French AI startup specializing in open-source large language models (LLMs). The company was valued at $6.2bn (€5.8bn) during its last funding round.
Its flagship AI assistant, Le Chat, is regarded as Europe's most competitive AI offering when stacked up against alternatives from the US and China.