Google DeepMind today pulled the curtain back on AlphaEvolve, an artificial-intelligence agent that can invent brand-new computer algorithms — then put them straight to work inside the company’s vast computing empire.
AlphaEvolve pairs Google’s Gemini large language models with an evolutionary approach that tests, refines, and improves algorithms automatically. The system has already been deployed across Google’s data centers, chip designs, and AI training systems — boosting efficiency and solving mathematical problems that have stumped researchers for decades.
“AlphaEvolve is a Gemini-powered AI coding agent that is able to make new discoveries in computing and mathematics,” explained Matej Balog, a researcher at Google DeepMind, in an interview with VentureBeat. “It can discover algorithms of remarkable complexity — spanning hundreds of lines of code with sophisticated logical structures that go far beyond simple functions.”
The system dramatically expands upon Google’s previous work with FunSearch by evolving entire codebases rather than single functions. It represents a major leap in AI’s ability to develop sophisticated algorithms for both scientific challenges and everyday computing problems.
Inside Google’s 0.7% efficiency boost: How AI-crafted algorithms run the company’s data centers
AlphaEvolve has been quietly at work inside Google for over a year. The results are already significant.
One algorithm it discovered has been powering Borg, Google’s massive cluster management system. This scheduling heuristic recovers an average of 0.7% of Google’s worldwide computing resources continuously — a staggering efficiency gain at Google’s scale.
The discovery directly targets “stranded resources” — machines that have run out of one resource type (like memory) while still having others (like CPU) available. AlphaEvolve’s solution is especially valuable because it produces simple, human-readable code that engineers can easily interpret, debug, and deploy.
The AI agent hasn’t stopped at data centers. It rewrote part of Google’s hardware design, finding a way to eliminate unnecessary bits in a crucial arithmetic circuit for Tensor Processing Units (TPUs). TPU designers validated the change for correctness, and it’s now headed into an upcoming chip design.
Perhaps most impressively, AlphaEvolve improved the very systems that power itself. It optimized a matrix multiplication kernel used to train Gemini models, achieving a 23% speedup for that operation and cutting overall training time by 1%. For AI systems that train on massive computational grids, this efficiency gain translates to substantial energy and resource savings.
“We try to identify critical pieces that can be accelerated and have as much impact as possible,” said Alexander Novikov, another DeepMind researcher, in an interview with VentureBeat. “We were able to optimize the practical running time of [a vital kernel] by 23%, which translated into 1% end-to-end savings on the entire Gemini training card.”
Breaking Strassen’s 56-year-old matrix multiplication record: AI solves what humans couldn’t
Beyond improving Google’s existing systems, AlphaEvolve has solved mathematical problems that stumped human experts for decades.
The system designed a novel gradient-based optimization procedure that discovered multiple new matrix multiplication algorithms. One discovery toppled a mathematical record that had stood for 56 years.
“What we found, to our surprise, to be honest, is that AlphaEvolve, despite being a more general technology, obtained even better results than AlphaTensor,” said Balog, referring to DeepMind’s previous specialized matrix multiplication system. “For these four by four matrices, AlphaEvolve found an algorithm that surpasses Strassen’s algorithm from 1969 for the first time in that setting.”
The breakthrough allows two 4×4 complex-valued matrices to be multiplied using 48 scalar multiplications instead of 49 — a discovery that had eluded mathematicians since Volker Strassen’s landmark work. According to the research paper, AlphaEvolve “improves the state of the art for 14 matrix multiplication algorithms.”
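To see why 49 was the number to beat: multiplying two 4×4 matrices naively takes 4³ = 64 scalar multiplications, while applying Strassen’s 2×2 scheme recursively uses 7 multiplications of 2×2 blocks, each costing 7 scalar multiplications, for 7 × 7 = 49. A short sketch of that counting:

```python
# Counting scalar multiplications for an n x n matrix product
# (n a power of two), naively versus under full Strassen recursion.

def naive_mult_count(n):
    """Schoolbook algorithm: n^3 scalar multiplications."""
    return n ** 3

def strassen_mult_count(n):
    """Strassen's scheme replaces 8 block products with 7 at
    every level of recursion, giving 7^(log2 n) multiplications."""
    if n == 1:
        return 1
    return 7 * strassen_mult_count(n // 2)
```

Here `naive_mult_count(4)` gives 64 and `strassen_mult_count(4)` gives 49; AlphaEvolve’s rank-48 decomposition for complex-valued matrices shaves off exactly one more.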
The system’s mathematical reach extends far beyond matrix multiplication. When tested against over 50 open problems in mathematical analysis, geometry, combinatorics, and number theory, AlphaEvolve matched state-of-the-art solutions in about 75% of cases. In approximately 20% of cases, it improved upon the best known solutions.
One victory came in the “kissing number problem” — a centuries-old geometric challenge to determine how many non-overlapping unit spheres can simultaneously touch a central sphere. In 11 dimensions, AlphaEvolve found a configuration with 593 spheres, breaking the previous record of 592.
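The kissing number problem is a good example of a task with a clear evaluator: any proposed configuration can be checked mechanically. A valid configuration places each unit sphere’s center at distance exactly 2 from the origin (so it touches the central sphere) with every pair of centers at least 2 apart (so no two spheres overlap). A minimal, illustrative checker:

```python
import math

# Illustrative verifier for a "kissing" configuration of unit spheres:
# every center must lie at distance 2 from the origin, and centers
# must be pairwise at least distance 2 apart.

def is_valid_kissing_config(centers, tol=1e-9):
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    origin = [0.0] * len(centers[0])
    # Each sphere must touch the central unit sphere.
    if any(abs(dist(c, origin) - 2.0) > tol for c in centers):
        return False
    # No two outer spheres may overlap.
    return all(dist(centers[i], centers[j]) >= 2.0 - tol
               for i in range(len(centers))
               for j in range(i + 1, len(centers)))
```

In two dimensions, six spheres arranged hexagonally pass this check while seven equally spaced spheres fail; in 11 dimensions, AlphaEvolve’s 593-sphere configuration is the analogous certificate.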
How it works: Gemini language models plus evolution create a digital algorithm factory
What makes AlphaEvolve different from other AI coding systems is its evolutionary approach.
The system deploys both Gemini Flash (for speed) and Gemini Pro (for depth) to propose changes to existing code. These changes get tested by automated evaluators that score each variation. The most successful algorithms then guide the next round of evolution.
AlphaEvolve doesn’t just generate code from its training data. It actively explores the solution space, discovers novel approaches, and refines them through an automated evaluation process — creating solutions humans might never have conceived.
“One critical idea in our approach is that we focus on problems with clear evaluators. For any proposed solution or piece of code, we can automatically verify its validity and measure its quality,” Novikov explained. “This allows us to establish fast and reliable feedback loops to improve the system.”
This approach is particularly valuable because the system can work on any problem with a clear evaluation metric — whether it’s energy efficiency in a data center or the elegance of a mathematical proof.
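The propose-evaluate-select loop described above can be sketched in a few lines. In this toy version the LLM mutation step is stubbed out with a random perturbation; in AlphaEvolve that role is played by Gemini models proposing code edits, and all function names here are illustrative rather than DeepMind’s actual API.

```python
import random

# Minimal sketch of an evolutionary loop with an automated evaluator:
# propose variations of the current survivors, score every candidate,
# and carry the best forward to seed the next round.

def evolve(initial, mutate, evaluate, generations=50, population=8, keep=2):
    """Return the best candidate found after the given generations."""
    pool = [initial]
    for _ in range(generations):
        # Propose new candidates by mutating current survivors.
        children = [mutate(random.choice(pool)) for _ in range(population)]
        # Score everything and keep only the top performers.
        pool = sorted(pool + children, key=evaluate, reverse=True)[:keep]
    return pool[0]
```

Run on a toy objective such as maximizing `-(x - 3) ** 2` with Gaussian-noise mutations, the loop steadily climbs toward `x = 3`. The essential ingredient is exactly what Novikov describes: a fast, automatic evaluator closing the feedback loop.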
From cloud computing to drug discovery: Where Google’s algorithm-inventing AI goes next
While currently deployed within Google’s infrastructure and mathematical research, AlphaEvolve’s potential reaches much further. Google DeepMind envisions applications in material sciences, drug discovery, and other fields requiring complex algorithmic solutions.
“The best human-AI collaboration can help solve open scientific challenges and also apply them at Google scale,” said Novikov, highlighting the system’s collaborative potential.
Google DeepMind is now developing a user interface with its People + AI Research team and plans to launch an Early Access Program for selected academic researchers. The company is also exploring broader availability.
The system’s flexibility marks a significant advantage. Balog noted that “at least previously, when I worked in machine learning research, it wasn’t my experience that you could build a scientific tool and immediately see real-world impact at this scale. This is quite unusual.”
As large language models advance, AlphaEvolve’s capabilities will grow alongside them. The system demonstrates an intriguing evolution in AI itself — starting within the digital confines of Google’s servers, optimizing the very hardware and software that gives it life, and now reaching outward to solve problems that have challenged human intellect for decades or centuries.