Like finding the best solution in a game of chess or the board game Go, artificial intelligence could help accelerate the discovery of drugs from a decade to potentially weeks, according to Nobel laureate and Cambridge alumnus Sir Demis Hassabis.
The chief executive officer and co-founder of Google DeepMind described a new era of ‘digital biology’ – and offered his insights into a future in which artificial general intelligence helps us understand our place in the universe.

Sir Demis was speaking at a special event in Cambridge, exploring how AI can accelerate scientific discovery, bringing “digital speed” to drug development.
It came after he was jointly awarded the Nobel Prize in Chemistry with Google DeepMind colleague Dr John Jumper for their AI research contributions to the prediction of protein structures.
“Right now it takes an average of 10 years for a drug to be developed, and it’s extraordinarily expensive, billions and billions of dollars,” he said. “And so I’m thinking, why can’t we use these techniques to reduce that down from years to months? Maybe even, one day, weeks? Just like we reduced down the discovery of protein structures from potentially years down to minutes and seconds.”
Speaking to an audience of Cambridge students and alumni, he described AI as potentially the “perfect description language for biology”.
He was speaking at Babbage Lecture Theatre, where he attended his first lecture as a student almost 30 years ago.
Sir Demis studied computer science at Queens’ College as an undergraduate in the 1990s – a subject that had fascinated him throughout his childhood.
“My journey on AI started with games, and specifically chess,” he said. “I was playing chess from the age of four and it got me thinking about thinking itself – how does our mind come up with these plans, with these ideas, how do we problem solve, and how can we improve? What was fascinating to me, perhaps more fascinating than even the games, was the actual mental processes behind it.”
He said he used games “to train my own mind” and his interest was fostered by playing computer chess.
“I remember being fascinated by the fact that someone had programmed this lump of inanimate plastic to actually play chess really well against you.
“And I ended up experimenting myself in my early teenage years with an Amiga 500 computer, and building those kinds of AI programs to play games like Othello. And really, that was my first taste of AI, and I decided from very early on that I would spend my entire career trying to push the frontiers of this technology.”
He co-founded DeepMind in 2010 and the company developed AI models to master popular games. Inspired by neuroscience, it created learning systems that also mastered Atari’s catalogue of games, like Space Invaders and Pong.
Sir Demis described computer games as the “perfect proving ground” for AI systems.
DeepMind was acquired by Google in 2014 for an undisclosed fee reported to be in the region of £400 million.
It came to global attention two years later when it solved what was considered one of the loftiest achievements for AI – mastering the ancient Chinese game of Go, which is a googol times more complex than chess.
There are 10 to the power of 170 possible board configurations, which is said to be more than the number of atoms in the visible universe. AlphaGo played different versions of itself thousands of times, to learn from its mistakes. This reinforcement learning eventually led to it defeating world champion Lee Sedol in March 2016 in an event watched by 200 million people. It was said to be a decade ahead of its time.
“Not only did AlphaGo, our system, win that match, importantly it actually came up with new original Go strategies. Even though we’ve played Go for thousands of years and professionally for hundreds of years, it was still able to find never-seen-before strategies,” he said. “My dream was to generalise this to all areas of scientific discovery.”

Systems can be trained, he explained, by having them self-play any two-player game perhaps 100,000 times, creating a database of game positions. From that, you can train a second, slightly better version that predicts both the likely moves in any given position and which player is more likely to win.
“Then you can use that version two to play against version one in a 100-game match and, if it wins by a significant margin, you replace version one with version two and create a new database of games that are slightly higher quality, and then you learn a version three system.
“If you do this and repeat it around 17 or 18 times, you go from playing randomly in the morning to, 24 hours or less later, by version 17 or 18, being stronger than world champion level. It’s quite an incredible thing.”
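The iterative recipe he describes – self-play, learn a stronger policy from the resulting games, then gate the new version on a head-to-head match – can be sketched in miniature. The toy below is illustrative only, not DeepMind’s code: it uses a trivially simple stone-taking game and a tabular win-rate “policy” in place of a neural network, but the outer loop mirrors the process described above.

```python
import random

PILE = 10  # toy game: players alternately take 1 or 2 stones; taking the last stone wins


def play_game(policy_a, policy_b, explore=0.0):
    """Play one game; return the winner (0 or 1) and the (player, pile, move) history."""
    pile, player, history = PILE, 0, []
    policies = (policy_a, policy_b)
    while pile > 0:
        legal = [m for m in (1, 2) if m <= pile]
        if random.random() < explore:
            move = random.choice(legal)  # occasional random move to explore new lines
        else:
            # otherwise pick the move the current policy scores highest for this pile size
            move = max(legal, key=lambda m: policies[player].get((pile, m), 0.0))
        history.append((player, pile, move))
        pile -= move
        if pile == 0:
            return player, history
        player = 1 - player


def train_next_version(policy, n_games=2000):
    """Self-play n_games with exploration, then learn an improved value table."""
    counts, wins = {}, {}
    for _ in range(n_games):
        winner, history = play_game(policy, policy, explore=0.3)
        for player, pile, move in history:
            key = (pile, move)
            counts[key] = counts.get(key, 0) + 1
            wins[key] = wins.get(key, 0) + (1 if player == winner else 0)
    # the new "version": empirical win rate of each (pile, move) pair
    return {k: wins[k] / counts[k] for k in counts}


def match(policy_a, policy_b, n_games=100):
    """Return how many of n_games policy_a wins, alternating who moves first."""
    a_wins = 0
    for g in range(n_games):
        if g % 2 == 0:
            winner, _ = play_game(policy_a, policy_b)
            a_wins += winner == 0
        else:
            winner, _ = play_game(policy_b, policy_a)
            a_wins += winner == 1
    return a_wins


random.seed(0)
policy = {}  # version one effectively plays at random
for version in range(2, 8):
    candidate = train_next_version(policy)
    # gate: only promote the candidate if it clearly beats its predecessor
    if match(candidate, policy) > 55:
        policy = candidate
```

In a real system like AlphaZero the tabular win-rate lookup is replaced by a deep neural network evaluated inside a tree search, but the promotion loop – generate games, train, gate on a match – is the same shape.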
These neural networks reduce an “intractable search space” to something much more tractable “in a few minutes of compute time”.
In chess, DeepMind’s AlphaZero was even able to beat the very strong open-source chess engine Stockfish by playing what, for computers, was a new style of playing – prizing the mobility of its chess pieces over the number of them.
World champion Magnus Carlsen said he had been influenced by AlphaZero, describing it as one of his “heroes”.
Sir Demis said: “I felt that we were ready, we had the techniques that were mature enough and ready to now be applied outside of games and to try and tackle really meaningful problems.”
The challenge of protein folding was just such a problem.
Being able to predict the 3D structure of a protein from its amino acid sequence is valuable because it can help us discover and design new drugs, and understand disease.
The functions of proteins – the building blocks of life – are related to their structures but, as Cyrus Levinthal noted in 1969, it would take longer than the age of the known universe to enumerate all possible configurations of a typical protein by brute-force calculation.
And yet, as Sir Demis noted, “physics does solve this problem” because in nature proteins somehow fold spontaneously, within milliseconds.
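Levinthal’s argument is a back-of-the-envelope calculation that is easy to reproduce. The numbers below are the illustrative ones commonly used (a 100-residue chain, roughly three conformations per residue, one conformation sampled per picosecond), not figures from the talk:

```python
# Levinthal's paradox, back of the envelope (illustrative numbers):
# a 100-residue protein chain with ~3 conformations per residue,
# sampled at one conformation per picosecond.
conformations = 3 ** 100                          # ~5e47 possible configurations
seconds_to_enumerate = conformations * 1e-12      # at 1 conformation per picosecond
age_of_universe_s = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 seconds

# Brute-force enumeration would take vastly longer than the universe has existed
print(seconds_to_enumerate / age_of_universe_s)
```

The ratio comes out above 10^17 – which is why the fact that real proteins fold in milliseconds is so striking, and why a learned model that guides the search is needed.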
The challenge of predicting protein structures is one scientists had been working on for at least 50 years when, in November 2020, DeepMind’s AlphaFold2 tool was declared to have solved it by CASP – the Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction – a biennial competition, established in 1994. It opened up the potential for biologists to use computational structure prediction as a key tool in their research.

DeepMind used AlphaFold2 to predict the structures of all 200 million proteins known to science, working with EMBL-EBI on the Wellcome Genome Campus to make the system and all these structures freely and openly available for anyone to use.
“It’s kind of like a billion years of PhD time done in one year,” said Sir Demis. “And it’s amazing to think about how much science could be accelerated. Two million researchers are using it from pretty much every country in the world. It’s been cited over 30,000 times now and it’s become a standard tool in biology research.”
These structures exist across much of life on Earth, meaning new avenues have opened up for research, not only in drug discovery but into fields such as agriculture and climate.
AlphaFold is being used to tackle plastic pollution, antibiotic resistance and neglected diseases, Sir Demis said.
“The mission of DeepMind from the beginning was about building AI responsibly to benefit humanity, but the way we used to articulate it when we started out was a two-step process, step 1 – solve artificial intelligence, step 2 – use it to solve everything else.
“If I look at all the work we’ve done in the last 15 years, first of all our games work, and then now with the scientific work that we’ve been working on, it’s all about making this search ‘tractable’.
“You have this incredibly complex problem, and there’s many possible solutions to the problem, and you’ve got to find the optimal solution – kind of like a needle and a haystack. And you can’t do it by brute force, so you have to learn this neural network model, so that you can efficiently guide the search and find the optimal solution.
“I think AI will be applicable to pretty much every field, and I think there are many, many advances to be made over the next five to 10 years by doing that,” said Sir Demis.

This future could lead to the development of artificial general intelligence, a theoretical AI system capable of the same kinds of cognitive tasks that a human can do.
On the path to AGI, Google DeepMind is already making advances in AI’s understanding of the physics of the real world.
Sir Demis noted its new state-of-the-art Veo 2 video generation tool creates videos from a text description, while Genie 2 can generate a computer game from a single prompt.
Stressing the importance of AI safety and the responsibility that comes with this technology, Sir Demis said Google DeepMind’s SynthID tool invisibly watermarked AI-generated content, so that synthetically generated images, audio, text or video can be detected.
“AI has this incredible potential to help with our greatest challenges, from climate to health. But it is going to affect everyone, so I think it’s really important that we engage with a wide range of stakeholders from society. And I think that’s going to become increasingly important given the exponential improvement that we’re seeing with these technologies,” he said.
The Silicon Valley mantra for other technologies of “move fast and break things” was not, he said, appropriate for AI.
Sir Demis was “very excited” by Google DeepMind’s work on a research prototype assistant that can understand the world around us – the next generation of virtual assistant technology, or ‘universal assistants’.
“We call it ‘Project Astra’, where you have it on your phone or some other devices, maybe glasses. It’s an assistant you can take around with you in the real world and it helps you in everyday life,” he said.
But the next step for AI is building planning systems, as seen in AlphaGo, that can search and find the best solutions to problems on top of world models like Google Gemini that understand how the real world works.
This combination could plan and achieve things in the real world, said Sir Demis, explaining: “That’s key to things like robotics working, which I think in the next two or three years is going to be a huge area that’s going to have massive advances.”
Will we need massively powerful quantum computing to make the next leaps?
Sir Demis described himself as “Turing’s champion” in reference to Alan Turing, the Second World War code-breaker often described as the father of modern computing.
“How far can these Turing machines and the idea of classical computing go?” he asked. “There are a lot of things that are thought to require quantum computing to solve. My conjecture is that actually classical Turing machines that these types of AI systems are built on can do a lot more than we previously gave them credit for.
“If you think about AlphaFold and protein folding – proteins are quantum systems, they operate at the atomic scale and one might think you need quantum simulations to actually be able to find the structures of proteins. And yet we were able to approximate those solutions with our neural networks.
“And so one potential idea is that any pattern that can be generated or found in nature can be efficiently discovered and modelled by one of these classical learning algorithms. And if that turns out to be true, it has all sorts of implications for quantum mechanics and actually fundamental physics, which is something that I hope to explore.
“Maybe these classical systems will help us uncover what the true nature of reality might be.
“And that leads me back to the whole reason I started my path on AI many, many years ago. I always believed that AGI built in this way could be the ultimate general purpose tool to understand the universe around us and our place in it.”