At Google I/O this week, amid the usual parade of dazzling product demos and AI-powered announcements, something unusual happened: Google declared war — quietly — in the race to build artificial general intelligence (AGI).
“We fully intend that Gemini will be the very first AGI,” said Google co-founder Sergey Brin, who made a surprise, unscheduled appearance at what was originally planned as a solo fireside chat with Demis Hassabis, CEO of DeepMind, Google’s AI research powerhouse. The conversation, hosted by Big Technology founder Alex Kantrowitz, pressed both men on the future of intelligence, scale, and the evolving definition of what it means for a machine to think.
The moment was fleeting but unmistakable. In a field where most players hedge their talk of AGI with caveats — or avoid the term altogether — Brin’s comment stood out. It marked the first time a Google executive had explicitly stated an intent to win the AGI race, a contest more often associated with Silicon Valley rivals like OpenAI and Elon Musk than with the search giant.
Yet Brin’s boldness contrasted sharply with the caution expressed by Hassabis, a former neuroscientist and game developer whose vision has long steered DeepMind’s approach to AI. While Brin framed AGI as an imminent milestone and competitive objective, Hassabis called for clarity, restraint, and scientific precision.
“What I’m interested in, and what I would call AGI, is really a more theoretical construct, which is: what is the human brain as an architecture able to do?” Hassabis explained. “It’s clear to me today’s systems don’t have that. And the other reason I think the hype on AGI today is sort of overblown is that our systems are not consistent enough to be considered fully general. Yet they’re quite general.”
This philosophical tension between Brin and Hassabis — one chasing scale and first-mover advantage, the other warning of overreach — may define Google’s future as much as any product launch.
Inside Google’s AGI timeline: Why Brin and Hassabis disagree on when superintelligence will arrive
The contrast between the two executives became even more apparent when Kantrowitz posed a simple question: AGI before or after 2030?
“Before,” Brin answered without hesitation.
“Just after,” Hassabis countered with a smile, prompting Brin to joke that Hassabis was “sandbagging.”
This five-second exchange encapsulates the subtle but significant tension in Google’s AGI strategy. While both men clearly believe powerful AI systems are coming this decade, their different timelines reflect fundamentally different approaches to the technology’s development.
Hassabis took pains throughout the conversation to establish a more rigorous definition of AGI than is commonly used in industry discussions. For him, the human brain serves as “an important reference point, because it’s the only evidence we have, maybe in the universe, that general intelligence is possible.”
True AGI, in his view, would require showing “your system was capable of doing the range of things even the best humans in history were able to do with the same brain architecture. It’s not one brain but the same brain architecture. So what Einstein did, what Mozart was able to do, what Marie Curie and so on.”
By contrast, Brin’s focus appeared more oriented toward competitive positioning than scientific precision. When asked about his return to day-to-day technical work at Google, Brin explained: “As a computer scientist, it’s a very unique time in history, like, honestly, anybody who’s a computer scientist should not be retired right now. Should be working on AI.”
DeepMind’s scientific roadmap clashes with Google’s competitive AGI strategy
Despite their different emphases, both leaders outlined similar technical challenges that need to be solved on the path to more advanced AI.
Hassabis identified several specific barriers, noting that “to get all the way to something like AGI, I think may require one or two more new breakthroughs.” He pointed to limitations in current systems’ reasoning abilities, creative invention, and the accuracy of their “world models.”
“For me, for something to be called AGI, it would need to be consistent, much more consistent across the board than it is today,” Hassabis explained. “It should take, like, a couple of months for maybe a team of experts to find a hole in it, an obvious hole in it, whereas today, it takes an individual minutes to find that.”
Both executives agreed on the importance of “thinking” capabilities in AI systems. Google’s newly announced “Deep Think” feature, which allows AI models to engage in parallel reasoning processes that check each other, represents a step in this direction.
“We’ve always been big believers in what we’re now calling this thinking paradigm,” Hassabis said, referencing DeepMind’s early work on systems like AlphaGo. “If you look at a game like chess or go… we had versions of AlphaGo and AlphaZero with the thinking turned off. So it was just the model telling you its first idea. And, you know, it’s not bad. It’s maybe like master level… But then if you turn the thinking on, it’s been way beyond World Champion level.”
Brin concurred, adding: “Most of us get some benefit from thinking before we speak, although not always; I was reminded to do that. But I think the AIs, obviously, are much stronger once you add that capability.”
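Hassabis’s AlphaGo analogy maps cleanly onto how these “thinking” systems are typically built. As a concrete illustration, here is a minimal sketch of one published technique in this family, self-consistency voting, in which several reasoning paths are sampled in parallel and cross-check one another by majority agreement. This is an illustration of the general paradigm, not Google’s actual Deep Think implementation; sample_fn is a hypothetical stand-in for any LLM call that returns one reasoned answer per invocation.

```python
import collections
from typing import Callable

def parallel_reasoning(question: str,
                       sample_fn: Callable[[str], str],
                       n_paths: int = 8) -> tuple[str, float]:
    """Sample several independent reasoning paths, then keep the
    answer that the most paths converge on (self-consistency voting).

    sample_fn is a hypothetical wrapper around any LLM API called
    with temperature > 0, so each call yields a different path.
    """
    answers = [sample_fn(question) for _ in range(n_paths)]
    # The paths "check each other" in the simplest possible sense:
    # an answer wins only if multiple independent chains agree on it.
    best, votes = collections.Counter(answers).most_common(1)[0]
    return best, votes / n_paths
```

In AlphaGo terms, setting n_paths to 1 is the “thinking turned off” mode Hassabis describes: you get the model’s first idea, which is decent but far from its best.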
Beyond scale: How Google is betting on algorithmic breakthroughs to win the AGI race
When pressed on whether scaling current models or developing new algorithmic approaches would drive progress, both leaders emphasized the need for both — though with slightly different emphases.
“I’ve always been of the opinion you need both,” Hassabis said. “You need to scale to the maximum the techniques that you know about. You want to exploit them to the limit, whether that’s data or compute scale, and at the same time, you want to spend a bunch of effort on what’s coming next.”
Brin agreed but added a notable historical perspective: “If you look at things like the N-body problem and simulating just gravitational bodies… as you plot it, the algorithmic advances have actually beaten out the computational advances, even with Moore’s law. If I had to guess, I would say the algorithmic advances are probably going to be even more significant than the computational advances.”
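Brin’s claim is easy to make concrete with the N-body example itself. Under the standard textbook complexities, direct pairwise force summation costs on the order of n² operations, while the Barnes-Hut tree algorithm brings that down to roughly n log n. The short sketch below, which assumes only those two well-known complexity figures, shows how quickly the algorithmic gain outruns hardware gains as n grows.

```python
import math

def direct_ops(n: int) -> float:
    # Direct summation: every pair of bodies interacts, O(n^2).
    return float(n) * n

def barnes_hut_ops(n: int) -> float:
    # Barnes-Hut tree code: distant bodies are grouped, O(n log n).
    return n * math.log2(n)

for n in (10_000, 1_000_000, 100_000_000):
    speedup = direct_ops(n) / barnes_hut_ops(n)
    print(f"n = {n:>11,}: algorithmic speedup ~ {speedup:,.0f}x")
```

At a million bodies, the better algorithm alone is worth a speedup of roughly 50,000x, on the order of what decades of Moore’s-law doublings would deliver, which is exactly the pattern Brin is pointing at.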
This emphasis on algorithmic innovation over pure computational scale aligns with Google’s recent research focus, including the AlphaEvolve system announced last week, which uses AI to improve AI algorithms.
Google’s multimodal vision: Why camera-first AI gives Gemini a strategic advantage
An area of clear alignment between the two executives was the importance of AI systems that can process and generate multiple modalities — particularly visual information.
Unlike competitors whose AI demos often emphasize voice assistants or text-based interactions, Google’s vision for AI heavily incorporates cameras and visual processing. This was evident in the company’s announcement of new smart glasses and the emphasis on computer vision throughout its I/O presentations.
“Gemini was built from the beginning, even the earliest versions, to be multimodal,” Hassabis explained. “That made it harder at the start… but in the end, I think we’re reaping the benefits of those decisions now.”
Hassabis identified two key applications for vision-capable AI: “a truly useful assistant that can come around with you in your daily life, not just stuck on your computer or one device,” and robotics, where he believes the bottleneck has always been the “software intelligence” rather than hardware.
“I’ve always felt that the universal assistant is the killer app for smart glasses,” Hassabis added, a statement that positions Google’s newly announced device as central to its AI strategy.
Navigating AI safety: How Google plans to build AGI without breaking the internet
Both executives acknowledged the risks that come with rapid AI development, particularly with generative capabilities.
When asked about video generation and the potential for model degradation from training on AI-generated content — a phenomenon some researchers call “model collapse” — Hassabis outlined Google’s approach to responsible development.
“We’re very rigorous with our data quality management and curation,” he said. “For all of our generative models, we attach SynthID to them, so there’s this invisible AI-made watermark that is very robust; it has held up now for a year, 18 months since we released it.”
The concern about responsible development extends to AGI itself. When asked whether one company would dominate the landscape, Hassabis suggested that after the first systems are built, “we can imagine using them to shard off many systems that have safe architectures, sort of built under… provably underneath them.”
From simulation theory to AGI: The philosophical divide between Google’s AI leaders
Perhaps the most revealing moment came at the end of the conversation, when Kantrowitz asked a lighthearted question about whether we live in a simulation — inspired by a cryptic tweet from Hassabis.
Nature to simulation at the press of a button, does make you wonder… ♾? https://t.co/lU77WHio4L
— Demis Hassabis (@demishassabis) May 7, 2025
Even here, the philosophical differences between the two executives were apparent. Hassabis offered a nuanced perspective: “I don’t think this is some kind of game, even though I wrote a lot of games. I do think that ultimately, underlying physics is information theory. So I do think we’re in a computational universe, but it’s not just a straightforward simulation.”
Brin, meanwhile, approached the question with logical precision: “If we’re in a simulation, then by the same argument, whatever beings are making the simulation are themselves in a simulation for roughly the same reasons, and so on and so forth. So I think you’re going to have to either accept that we’re in an infinite stack of simulations or that there’s got to be some stopping criterion.”
The exchange captured the essential dynamic between the two: Hassabis the philosopher-scientist, approaching questions with nuance and from first principles; Brin the pragmatic engineer, breaking problems down into logical components.
Brin’s declaration during his Google I/O appearance marks a seismic shift in the AGI race. By explicitly stating Google’s intent to win, he’s abandoned the company’s previous restraint and directly challenged OpenAI’s position as the perceived AGI frontrunner.
This is no small matter. For years, OpenAI has owned the AGI narrative while Google carefully avoided such bold proclamations. Sam Altman has relentlessly positioned OpenAI’s entire existence around the pursuit of artificial general intelligence, turning what was once an esoteric technical concept into both a corporate mission and cultural touchstone. His frequent hints about GPT-5’s capabilities and vague but tantalizing comments about artificial superintelligence have kept OpenAI in headlines and investor decks.
OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5:
We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings.
We want AI to “just work” for you; we realize how complicated our model and product offerings have gotten.
We hate…
— Sam Altman (@sama) February 12, 2025
By deploying Brin — not just any executive, but a founder with near-mythic status in Silicon Valley — Google has effectively announced it won’t cede this territory without a fight. The move carries special weight coming from Brin, who rarely makes public appearances but commands extraordinary respect among engineers and investors alike.
The timing couldn’t be more significant. With Microsoft’s backing giving OpenAI seemingly limitless resources, and Meta’s aggressive open-source strategy threatening to commoditize certain aspects of AI development, Google needed to reassert its position at the vanguard of AI research. Brin’s statement does exactly that, serving as both a rallying cry for Google’s AI talent and a shot across the bow to competitors.
What makes this three-way contest particularly fascinating is how differently each company approaches the AGI challenge. OpenAI has bet on tight secrecy around training methods paired with splashy consumer products. Meta emphasizes open research and democratized access. Google, with this new positioning, appears to be staking out middle ground: the scientific rigor of DeepMind combined with the competitive urgency embodied by Brin’s return.
What Google’s AGI gambit means for the future of AI innovation
As Google continues its push toward more powerful AI systems, the balance between these approaches will likely determine its success in what has become an increasingly competitive field.
Google’s decision to bring Brin back into day-to-day operations while maintaining Hassabis’s leadership at DeepMind suggests an understanding that both competitive drive and scientific rigor are necessary components of its AI strategy.
Whether Gemini will indeed become “the very first AGI,” as Brin confidently predicted, remains to be seen. But the conversation at I/O made clear that Google is now openly competing in a race it had previously approached with more caution.
For an industry watching every signal from AI’s major players, Brin’s declaration represents a significant shift in tone — one that may pressure competitors to accelerate their own timelines, even as voices like Hassabis continue to advocate for careful definitions and responsible development.
In this tension between speed and science, Google may have found its unique position in the AGI race: ambitious enough to compete, cautious enough to do it right.