Mistral AI disclosed the environmental cost of its AI on July 22, presenting what appears to be the first comprehensive lifecycle assessment of a large language model. A detailed review of the French AI startup’s Mistral Large 2 model reveals that the training process alone generated 20,400 metric tons of carbon dioxide equivalent and consumed 281,000 cubic meters of water over 18 months.
This analysis adheres to stringent standards and quantifies the environmental impacts of large language models (LLMs) in terms of greenhouse gas emissions (GHG), water consumption, and resource depletion.
Despite some recent efforts, such as the Coalition for Sustainable AI, which was launched during the Paris AI Action Summit in February 2025, there is still significant work to be done, the report added. The company stated that without greater transparency, public institutions, businesses, and even users will struggle to compare models, make informed purchasing choices, fulfil extra-financial obligations, or mitigate the impacts associated with AI usage.
However, environmental sustainability reporting for AI tends to be concentrated among companies subject to European Union (EU) rules. A key reason for that is the EU’s regulatory approach.
Why the EU Stands Out
The EU AI Act came into force in August 2024, requiring companies to report on their energy consumption, carbon emissions and resource usage for AI systems.
Sagar Vishnoi, cofounder and director of Future Shift Labs, a global think tank, told AIM, “Europe is undoubtedly leading in formulating robust, anticipatory AI frameworks, with the EU AI Act setting a global benchmark by explicitly incorporating sustainability, risk classification, and sector-specific obligations.”
He said that this reflects the EU’s long-standing tradition of precautionary regulation, especially in tech and environmental domains, adding that Europe’s approach reflects democratic values, a human-centric orientation and a balance between innovation and compliance.
Mistral’s audit emphasises environmental transparency as a crucial factor in enterprise AI, providing new decision-making criteria for tech leaders. This pushes organisations to incorporate ecological impact into AI procurement alongside traditional metrics, leading to more accurate total cost of ownership calculations.
These standards apply only to systems used within the EU, meaning that only companies based in the EU or serving EU users are required to comply. Global companies often adopt EU standards solely for that market rather than across their operations. In contrast to the EU, international organisations such as the Organisation for Economic Cooperation and Development (OECD), the Global Partnership on AI (GPAI), and UNESCO typically serve as norm-setting bodies.
Although their charters focus on consensus and cooperation, companies are not legally bound to follow their policies. The OECD’s AI Principles, established in 2019, are the first intergovernmental standards on AI, signed by 46 countries, but lack enforcement mechanisms. Similarly, GPAI is an advisory body without regulatory power, and UNESCO’s recommendations, endorsed by 193 member states, also lack a global monitoring system.
India and Sustainable AI Compliance
Vishnoi believes that India is also aligning its regulatory apparatus to address similar priorities.
He noted that the Digital Personal Data Protection Act (2023) establishes a fiduciary accountability model. Additionally, NITI Aayog’s Responsible AI strategy emphasises transparency, fairness, and environmental impact. Initiatives like IndiaAI and public-private partnerships under Startup India are fostering socially and environmentally responsible AI innovation.
Many AI companies avoid setting environmental and sustainability goals because of the ambiguous nature of these initiatives.
Narendra Makwana, co-founder and CEO of GreenStitch.io, previously told AIM that only Indian companies with European customers report on sustainability, largely because of the EU’s strict policies on AI ethics and energy consumption. He believes that complying with the EU framework is manageable, as customers and partners in these regions recognise the significance of adhering to legally binding policy requirements.
Vishnoi argued that while India’s policy landscape may seem less centralised compared to Europe’s, its adaptive, sectoral approach, particularly in agriculture, health, and education, aims to integrate sustainability into the AI development lifecycle. “In this way, India is not only catching up with global standards but also adapting them to align with its democratic and developmental goals,” he added.
There is no evidence that AI-first Indian companies are voluntarily publishing detailed model-level environmental metrics comparable to Mistral AI’s.
Currently, most Indian AI startups and mid-sized firms don’t disclose their impact metrics. Some larger IT companies, such as Infosys, TCS, and Wipro, release broader sustainability reports; however, these reports are not specific to AI.
Srinivas Bhat, director at IBM Z development, said that during the design phase, the company focuses on operating speed, nanometer technology, and power consumption. After chip release, extensive testing includes stress tests for power consumption and performance. He asserted that the performance team assesses changes when launching new operating systems, ensuring the company meets both power and performance goals.
However, this practice may not be limited to India; as a larger company, IBM becomes obligated at certain thresholds to report its energy consumption.
In late April, the Securities and Exchange Board of India (SEBI) updated its Environmental, Social, and Governance (ESG) mandate, postponing the rollout of value chain disclosure and assurance by one year. Companies will now have more flexibility in choosing between assessment and assurance.
The regulatory board aims to reduce the compliance burden on these listed companies; however, this could undermine the goal of holding large firms accountable for their environmental impact.
In the future, it may be beneficial for AI companies in India and around the world to adhere to such regulatory requirements even where they are not mandatory. Looking out for the planet is a benefit in itself.