Big Tech giants like Google, Meta, and OpenAI are locked in a high-stakes race to develop artificial general intelligence (AGI) — AI systems capable of thinking, planning, and adapting on par with humans. But according to Google DeepMind CEO Demis Hassabis, that goal remains distant, as current AI still makes surprisingly simple errors despite impressive achievements.
Speaking on the Google for Developers podcast, Hassabis described today’s AI as having “jagged intelligence”: excelling in certain domains but stumbling in basic ones. He cited Google’s latest Gemini model, enhanced with Deep Think reasoning, which has reached gold-medal-level performance at the International Mathematical Olympiad, one of the world’s toughest math competitions. Yet the same model can still make avoidable mistakes on high-school math or fail at simple games.
“It shouldn’t be that easy for the average person to just find a trivial flaw in the system,” Hassabis remarked.
This inconsistency, he explained, is a sign that AI is far from human-level intelligence. Simply scaling up models with more data and computing power, he argued, will not bridge the gap to AGI; instead, fundamental capabilities like reasoning, planning, and memory, which remain underdeveloped even in the most advanced systems, must be strengthened.
Another challenge, Hassabis noted, is the lack of rigorous testing. Many standard AI benchmarks are already saturated, creating the illusion of near-perfect performance while masking weaknesses. For example, Gemini models recently scored 99.2% on the AIME (American Invitational Mathematics Examination) benchmark, leaving almost no room for measurable improvement. But a near-perfect score doesn’t mean the model is flawless; it means the test has stopped being informative.
To overcome this, Hassabis called for “new, harder benchmarks” that go beyond academic problem-solving to include intuitive physics, real-world reasoning, and “physical intelligence” — the ability to understand and interact with the physical world as humans do. He also stressed the need for robust safety benchmarks capable of detecting risks such as deceptive behavior in AI systems.
“We’re in need of new, harder benchmarks, but also broader ones, in my opinion — understanding world physics and intuitive physics and other things that we take for granted as humans,” he said.
While Hassabis has previously suggested AGI might arrive within five to ten years, he now emphasizes caution. He believes AI companies should focus on perfecting existing models before chasing full AGI. The path ahead, he implied, is less about winning a race and more about making AI systems reliable, safe, and genuinely intelligent across the board.
For now, despite breakthroughs in reasoning and problem-solving, the dream of AI that matches human intelligence remains a work in progress — and one that may take longer than the industry’s most optimistic predictions.