Google DeepMind has launched AlphaEvolve, a new coding agent that uses large language models to evolve and optimise algorithms across computing and mathematics.
Powered by Gemini Flash and Gemini Pro, AlphaEvolve pairs model-generated code with automated evaluators to verify, score, and evolve high-performing solutions.
“AlphaEvolve is an agent that can go beyond single function discovery to evolve entire codebases and develop much more complex algorithms,” stated Google DeepMind in its blog post.
Google DeepMind is planning an early access programme for selected academic users and is also exploring ways to make AlphaEvolve more broadly available. Interested users can register their interest through a dedicated form.
The company believes AlphaEvolve could be transformative across multiple fields, including materials science, drug discovery, sustainability, and broader technological and business applications.
The system combines prompt sampling, Gemini-generated code, and automated program evaluation in an evolutionary loop: candidate programs are scored by evaluators, and the strongest are carried forward as the basis for further variations.
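At a high level, this resembles a classic evolutionary loop in which a language model mutates candidate programs and an automated evaluator decides which survive. The sketch below is a minimal illustration of that loop, not DeepMind's implementation; `llm_propose_variant` and `evaluate` are hypothetical stand-ins for the Gemini models and the task-specific evaluators described in the announcement.

```python
def evolve(initial_program: str, llm_propose_variant, evaluate,
           population_size: int = 20, generations: int = 100):
    """Minimal evolutionary loop: an LLM proposes code variants, an automated
    evaluator scores them, and the best candidates seed the next round.

    `llm_propose_variant(code)` and `evaluate(code)` are placeholders for the
    Gemini-backed mutation step and the task-specific evaluators described in
    the announcement; both are assumptions made for illustration.
    """
    # Start the population from the user-supplied initial program.
    population = [(evaluate(initial_program), initial_program)]

    for _ in range(generations):
        # Take the current top-scoring programs as parents.
        parents = sorted(population, reverse=True)[:population_size]

        # Ask the LLM for a modified version of each parent and score it.
        children = []
        for _score, code in parents:
            candidate = llm_propose_variant(code)
            children.append((evaluate(candidate), candidate))

        # Keep only the strongest programs for the next generation.
        population = sorted(parents + children, reverse=True)[:population_size]

    best_score, best_program = max(population)
    return best_program, best_score
```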
Over the past year, AlphaEvolve has been used to improve data centre scheduling, hardware design, and AI training workflows across Google. One deployment optimised scheduling in Borg, Google’s data centre orchestrator. “This solution, now in production for over a year, continuously recovers, on average, 0.7% of Google’s worldwide compute resources,” the company said.
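As a rough illustration of what a scheduling heuristic of this kind looks like, the function below scores candidate machines for a pending job so a scheduler can pick the best fit. It is purely hypothetical, shown only to suggest the kind of compact, human-readable rule being evolved; it is not the rule running inside Borg.

```python
def score_machine(free_cpu: float, free_mem: float,
                  req_cpu: float, req_mem: float) -> float:
    """Hypothetical machine-scoring heuristic for a bin-packing scheduler.

    Prefers placements that leave CPU and memory in balance, so that neither
    resource is stranded while the other runs out. Purely illustrative; this
    is not the heuristic AlphaEvolve discovered for Borg.
    """
    if req_cpu > free_cpu or req_mem > free_mem:
        return float("-inf")  # the job does not fit on this machine

    # Fraction of each resource that would remain free after placement.
    cpu_left = (free_cpu - req_cpu) / max(free_cpu, 1e-9)
    mem_left = (free_mem - req_mem) / max(free_mem, 1e-9)

    # Penalise imbalance between leftover CPU and memory.
    return -abs(cpu_left - mem_left)
```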
AlphaEvolve also contributed to a Tensor Processing Unit (TPU) design. It suggested a Verilog-level change that removed redundant bits in a key arithmetic circuit. Google said this proposal passed verification tests and was integrated into an upcoming TPU release.
In AI training, AlphaEvolve found a better way to split a large matrix multiplication in Gemini’s architecture into subproblems, speeding up this core kernel by 23% and cutting overall training time by 1%. It also improved the performance of a FlashAttention kernel by 32.5%, an area engineers rarely touch by hand because such kernels are already heavily optimised by compilers.
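Choosing how to split a large multiplication into smaller blocks is exactly the kind of discrete design decision such a search can tune. The sketch below is a plain blocked matrix multiply with its tile sizes exposed as parameters; it is an illustration of the idea only, not Gemini’s kernel, which targets accelerator hardware rather than NumPy.

```python
import numpy as np

def blocked_matmul(a: np.ndarray, b: np.ndarray,
                   tile_m: int = 64, tile_n: int = 64, tile_k: int = 64) -> np.ndarray:
    """Blocked (tiled) matrix multiplication with tunable tile sizes.

    The tile sizes (tile_m, tile_n, tile_k) are the kind of discrete
    parameters a search procedure can tune for a given matrix shape and
    hardware target. Illustrative only; not Gemini's actual kernel.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"

    c = np.zeros((m, n), dtype=np.result_type(a, b))
    for i in range(0, m, tile_m):
        for j in range(0, n, tile_n):
            for p in range(0, k, tile_k):
                # Multiply one pair of tiles and accumulate into the output block.
                c[i:i + tile_m, j:j + tile_n] += (
                    a[i:i + tile_m, p:p + tile_k] @ b[p:p + tile_k, j:j + tile_n]
                )
    return c
```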
Beyond infrastructure, AlphaEvolve tackled algorithmic challenges in mathematics. It discovered a method for multiplying 4×4 complex-valued matrices using 48 scalar multiplications, improving on Strassen’s 1969 algorithm, previously the best known approach in this setting. “This finding demonstrates a significant advance over our previous work, AlphaTensor,” the company said.
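For context on the numbers: the schoolbook method uses 64 scalar multiplications for two 4×4 matrices, and applying Strassen’s 2×2 construction recursively (seven multiplications at each of two levels) brings that down to 49, the count that the new 48-multiplication algorithm improves on for complex-valued matrices.

```latex
% Scalar multiplication counts for multiplying two 4x4 matrices
\begin{align*}
  \text{schoolbook:}                    &\quad 4^3 = 64 \\
  \text{Strassen, applied recursively:} &\quad 7 \times 7 = 49 \\
  \text{AlphaEvolve (complex-valued):}  &\quad 48
\end{align*}
```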
Applied to more than 50 open problems across mathematics, AlphaEvolve rediscovered the best known solutions in roughly 75% of cases and improved on them in 20%. One of its advances was on the kissing number problem, where it found a configuration of 593 non-overlapping unit spheres touching a central unit sphere in 11 dimensions, establishing a new lower bound.
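Results like these lean on the automated evaluators mentioned earlier: a candidate configuration only counts if a verifier accepts it, and for the kissing number the check is simple to state. The verifier below is written for illustration under that framing and is not DeepMind’s evaluation code.

```python
import numpy as np

def is_kissing_configuration(centers: np.ndarray, tol: float = 1e-9) -> bool:
    """Check a candidate kissing configuration in any dimension.

    `centers` has shape (n, d): the centres of n unit spheres that should
    all touch a central unit sphere at the origin. The configuration is
    valid if every centre lies at distance 2 from the origin (touching the
    central sphere) and all pairwise distances are at least 2 (so the outer
    spheres do not overlap). Illustrative verifier, not DeepMind's evaluator.
    """
    # Each outer sphere touches the central unit sphere: |c_i| == 2.
    radii = np.linalg.norm(centers, axis=1)
    if not np.allclose(radii, 2.0, atol=tol):
        return False

    # No two outer spheres overlap: |c_i - c_j| >= 2 for i != j.
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    n = len(centers)
    dists[np.arange(n), np.arange(n)] = np.inf  # ignore self-distances
    return bool((dists >= 2.0 - tol).all())
```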