Semiconductor equipment manufacturer ASML and Mistral AI have announced a partnership to explore the use of AI models across ASML’s product portfolio, with the aim of enhancing its holistic lithography systems. ASML was also the lead investor in the AI startup’s latest funding round and now holds an 11% stake in Mistral AI on a fully diluted basis.
The deal carries considerable symbolic weight in the era of sovereign AI and trade barriers. Although modest in the grand scheme of things, especially compared with the eye-watering sums usually exchanged in the bubbly AI world, it brings together Europe’s AI superstar Mistral with the world’s only manufacturer of the EUV lithography machines used to produce AI accelerators. ASML may not be a household name outside the industry, but the company is a key player in global technology. Although not an acquisition, the deal echoes the way other AI accelerator makers have brought software capabilities in-house, such as AMD with Silo AI. Moreover, the startup, which has never been short of US funding through VC activity, has received a financial boost at just the right time, when US bidders were rumoured to be hovering; even Microsoft was said to be considering buying the company at some point. ASML now becomes its main shareholder, helping keep the threat of US ownership at bay at a critical moment and reinforcing one of Mistral’s unique selling points: its “sovereign AI” credentials, built on remaining independent from US companies.
From a technological perspective, Mistral AI has also developed a distinctive modus operandi, leveraging open-source models and targeting only enterprise customers, which sets it apart from its US competitors. In June, it launched its first reasoning model, Magistral, focused on domain-specific, multilingual reasoning, code and maths. Embracing open source from the outset helped it build a large developer ecosystem long before DeepSeek’s disruption of the landscape pushed competitors such as OpenAI to release open models of their own.
The company’s use of innovative mixture-of-experts (MoE) architectures and other optimisations means that its models are efficient in terms of computational resources while maintaining high performance, a key competitive differentiator. In practice, its systems deliver high performance per unit of compute cost, making them more cost-effective to run. Techniques such as sparse MoE allow capacity to scale without proportional increases in resource usage: only a small subset of experts is activated for each token, so parameter count can grow while per-token compute stays roughly flat.
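To make that idea concrete, below is a minimal, illustrative sparse MoE layer in plain Python/NumPy. It is a toy sketch, not Mistral’s architecture: the model dimensions, number of experts and top-k value are arbitrary assumptions. The router scores every expert for each token, but only the top-k experts are actually executed, which is where the compute saving relative to a dense layer comes from.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class SparseMoE:
    """Toy sparse mixture-of-experts layer: a learned router picks the
    top-k experts per token, so only k of n_experts feed-forward blocks
    run for any given token."""

    def __init__(self, d_model=64, d_hidden=256, n_experts=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.top_k = top_k
        # Router: linear projection from the token representation to expert logits.
        self.w_router = rng.standard_normal((d_model, n_experts)) * 0.02
        # Each expert is a small two-layer feed-forward network.
        self.experts = [
            (rng.standard_normal((d_model, d_hidden)) * 0.02,
             rng.standard_normal((d_hidden, d_model)) * 0.02)
            for _ in range(n_experts)
        ]

    def __call__(self, x):                                # x: (n_tokens, d_model)
        gates = softmax(x @ self.w_router)                # (n_tokens, n_experts)
        top_idx = np.argsort(-gates, axis=-1)[:, :self.top_k]
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            weights = gates[t, top_idx[t]]
            weights = weights / weights.sum()             # renormalise over the chosen experts
            for w, e in zip(weights, top_idx[t]):
                w1, w2 = self.experts[e]
                h = np.maximum(x[t] @ w1, 0.0)            # ReLU feed-forward expert
                out[t] += w * (h @ w2)                    # weighted sum of expert outputs
        return out

tokens = np.random.default_rng(1).standard_normal((4, 64))
print(SparseMoE()(tokens).shape)  # (4, 64): same output shape, only 2 of 8 experts run per token
```

In this sketch the total parameter count grows with the number of experts, but each token only ever touches two of them, which is the trade-off the efficiency argument rests on.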
In February 2024, Mistral AI launched Le Chat, a multilingual conversational assistant positioned against OpenAI’s ChatGPT and Google’s Gemini but with more robust privacy credentials. The company has intensified efforts to expand its business platform and tools around Le Chat, recently making enterprise-grade features, such as advanced memory capabilities and capacity as well as extensive third-party integrations, available to users at no cost. The latter include a list of connectors, built on the Model Context Protocol (MCP), supporting platforms such as Databricks, Snowflake, GitHub, Atlassian, and Stripe, among many others. This move should help Mistral AI penetrate the enterprise market by democratising access to advanced features, and it signals an ambitious strategy to achieve market dominance through integrated suites, not just applications.
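For context on how MCP-based connectors work in general, below is a minimal, illustrative client sketch using the open-source `mcp` Python SDK. It is not Mistral’s connector code: the choice of a GitHub MCP server and the npx launch command are assumptions made purely for illustration.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Illustrative assumption: launch a GitHub MCP server locally via npx.
server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-github"],
)

async def main():
    # Connect to the server over stdio and open an MCP session.
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # discover the tools the connector exposes
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```

An assistant built on such connectors would surface the discovered tools to the model and invoke them on its behalf (for example via the session’s tool-call mechanism), which is what makes a standard protocol attractive for integrating many third-party platforms at once.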