As large language models (LLMs) increasingly shape everything from work to entertainment, their growing capabilities have sparked a parallel conversation about what they cost the planet. Now, Mistral AI has published one of the most detailed environmental impact assessments of an AI model to date. In its newly released lifecycle analysis of Mistral Large 2, the company opens the black box on emissions, water consumption, and material depletion, offering hard numbers behind the growing environmental footprint of generative AI.
Developed in collaboration with Carbone 4 and ADEME, and peer-reviewed by Resilio and Hubblo, the study spans the model’s first 18 months of existence, ending in January 2025. The findings are sobering: training the model emitted 20,400 metric tons of CO₂ equivalent, consumed 281,000 cubic meters of water, and resulted in 660 kilograms of antimony-equivalent resource depletion. These figures reflect the toll of powering thousands of GPUs around the clock in massive data centers, an energy- and resource-intensive operation that is usually hidden from the end user’s view.
The price of inference
While model training is the most resource-intensive phase, inference, the process of serving answers to users, is no free ride. Mistral reports that a typical user interaction with its assistant Le Chat, involving roughly 400 tokens, emits 1.14 grams of CO₂ equivalent, uses 45 milliliters of water, and depletes 0.16 milligrams of antimony-equivalent resources. These numbers might seem modest, but they add up quickly at scale: multiplied across millions of daily queries, they point to a constant, compounding environmental cost that continues long after training ends.
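To make that scaling concrete, here is a rough back-of-the-envelope sketch in Python using the per-query figures above. The daily query volume is a purely illustrative assumption, not a number Mistral discloses.

```python
# Back-of-the-envelope scaling of Mistral's reported per-query figures.
# The per-query constants come from the report; the daily query volume
# below is an illustrative assumption, not a disclosed figure.

CO2_G_PER_QUERY = 1.14    # grams of CO2-equivalent per ~400-token interaction
WATER_ML_PER_QUERY = 45   # milliliters of water per interaction

def daily_footprint(queries_per_day: int) -> tuple[float, float]:
    """Return (metric tons of CO2e, cubic meters of water) for one day of traffic."""
    co2_tonnes = queries_per_day * CO2_G_PER_QUERY / 1_000_000   # grams -> metric tons
    water_m3 = queries_per_day * WATER_ML_PER_QUERY / 1_000_000  # milliliters -> cubic meters
    return co2_tonnes, water_m3

# Hypothetical volume of 10 million queries per day:
co2, water = daily_footprint(10_000_000)
print(f"{co2:.1f} t CO2e and {water:.0f} m^3 of water per day")
```

Under that assumed volume, a single day of inference would emit about 11.4 metric tons of CO₂ equivalent and consume roughly 450 cubic meters of water, which is why per-query figures matter even when each one looks tiny.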
Mistral notes that 85.5% of the model’s total greenhouse gas emissions and 91% of its water use stem from the compute processes that power both training and inference. That proportion underscores how much of the environmental burden is tied to keeping the model running, not just building it.

Mistral’s report serves as a manifesto for environmental accountability in AI. While ethical AI discussions often center on issues like bias, misinformation, and misuse, the environmental cost of LLMs has remained relatively unexamined. Mistral wants to change that, urging the industry to adopt common environmental reporting standards and move toward greater transparency.
The company draws a parallel to other sectors where labeling and lifecycle data on carbon emissions, energy use, or water footprint help consumers and regulators make informed choices. In AI, such data has been notably absent. By making this information public, Mistral sets a precedent that could encourage other AI developers to disclose their own environmental impacts and compete not just on performance, but on sustainability.
There’s also growing interest in how such disclosures might influence procurement decisions. Enterprises and governments increasingly want to ensure that the technologies they adopt align with broader sustainability goals. With Mistral’s report as a benchmark, environmental footprint may soon become a key performance indicator for AI systems.
Responsible AI deployment
Alongside the metrics, Mistral offers practical recommendations to reduce impact. These include selecting smaller models when high performance isn’t strictly necessary, batching requests to improve compute efficiency, and choosing data centers powered by renewable energy. The report also advocates for including environmental factors in vendor selection, reinforcing the idea that green AI is not just about design, but also about deployment.
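As a rough illustration of that deployment advice, the sketch below routes simple requests to a smaller model and batches work where latency allows. The model names, routing threshold, and batch size are hypothetical and are not drawn from Mistral’s report or API.

```python
# Illustrative sketch of two of the report's recommendations:
# prefer a smaller model when high performance isn't strictly needed,
# and batch requests to improve compute efficiency.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    needs_reasoning: bool  # set by the caller or a cheap upstream classifier

def pick_model(req: Request) -> str:
    """Route to a smaller model unless the task genuinely needs a large one."""
    if req.needs_reasoning or len(req.prompt) > 4000:
        return "large-model"   # placeholder model name
    return "small-model"       # placeholder model name

def batch(requests: list[Request], max_batch: int = 16) -> list[list[Request]]:
    """Group requests so the serving stack can process them together."""
    return [requests[i:i + max_batch] for i in range(0, len(requests), max_batch)]
```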
This represents a shift in how AI performance is measured. Rather than focusing solely on speed, scale, or intelligence, Mistral is pushing for a broader definition of excellence, one that includes ecological responsibility. As AI continues to scale, that perspective could become a standard, not an exception.
Mistral’s environmental audit arrives at a crucial moment. AI workloads are placing increasing strain on global infrastructure, even as countries set more ambitious climate targets. The future of generative AI, if unchecked, could quietly become one of the largest drivers of digital emissions. But it doesn’t have to be that way.
By putting numbers to what has long been an abstract conversation, Mistral is reframing the debate around AI’s future. The company’s call for transparency, best practices, and collective accountability could help steer the field toward a model of innovation that is not just powerful and safe, but sustainable as well.
In an industry that often measures progress in terms of tokens per second, Mistral’s study is a reminder that what matters just as much is what each of those tokens costs the planet.