
What will AI look like in 2030, just five short years from now? A Google DeepMind-commissioned study suggests that if current scaling trends continue, AI could soon operate at scales once thought unattainable, with major implications for research and development.
The report was produced by nonprofit research group Epoch AI and argues that exponential growth in compute, data, and investment could continue through the end of this decade, powering AI models that are 1,000 times more computationally intensive than today. That scale, the authors say, will push AI to new frontiers in desk-based science, from automating code and proofs to improving weather forecasts. But translating those digital breakthroughs into physical products such as new drugs or materials will take longer, limited by factors outside AI’s control.
Scaling as the Driver
The report frames scaling as the main driver of AI progress. Training compute has grown about four to five times annually since 2010, and Epoch AI expects that trajectory to continue if investment and infrastructure keep pace. The report notes that the largest AI clusters of 2020 had peak performance in the exascale range, or about 10^18 FLOP/s. If current scaling trends continue, the report says, the clusters used for training frontier AI could cost over $100 billion by 2030.
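For a sense of what that growth rate implies, here is a back-of-the-envelope extrapolation (the starting value of roughly 5 x 10^25 FLOP for a current frontier training run is an illustrative assumption, not a figure from the report):

```python
# Back-of-the-envelope extrapolation of frontier training compute.
start_flop = 5e25      # assumed scale of a current frontier training run (illustrative)
growth_per_year = 4.5  # midpoint of the ~4-5x annual growth cited above
years = 5              # 2025 -> 2030

projected = start_flop * growth_per_year ** years
print(f"Projected 2030 training run: {projected:.1e} FLOP")  # ~9.2e28 FLOP
```

At the low end of the range, 4x annual growth compounds to 4^5, or roughly 1,000x, consistent with the "1,000 times more computationally intensive" figure quoted earlier; the midpoint lands near the report's 10^29 FLOP projection.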

If current trends persist, the clusters used for training frontier AI would cost over $100B by 2030 and could support training runs of about 10^29 FLOP, Epoch AI asserts. (Source: Epoch AI)
“Such clusters could support training runs of about 10^29 FLOP – a quantity of compute that would have required running the largest AI cluster of 2020 continuously for over 3,000 years,” according to Epoch AI.
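The arithmetic behind that comparison is straightforward: a 10^29 FLOP training run divided by an exascale machine's 10^18 FLOP/s leaves about 10^11 seconds of continuous runtime.

```python
# Sanity check on the "over 3,000 years" comparison.
train_flop = 1e29        # projected 2030 training run (FLOP)
cluster_flops = 1e18     # peak rate of a 2020 exascale cluster (FLOP/s)
seconds_per_year = 3.156e7

years = train_flop / cluster_flops / seconds_per_year
print(f"{years:,.0f} years")  # ~3,169 years
```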
That 10^29 FLOP estimate dwarfs anything from the exascale era and puts the progress made so far in scaling compute into perspective, but reaching that scale in the next five years may sound far-fetched to those who witnessed the long road to exascale computing. The authors' answer is that what looks extreme at first glance is simply the logical outcome of extrapolating curves that have held steady for more than a decade.
“This exemplifies a repeating pattern in our findings: if today’s trends continue, they will lead to extreme outcomes. Should we believe they will continue? Over the past decade, extrapolation has been a strong baseline, and when we investigate arguments for a forthcoming slowdown, they are often not compelling.”
Could Scaling Slow Down?
One of the most common arguments is that scaling could soon “hit a wall,” with models failing to improve even with more compute. The report acknowledges this possibility but points out that recent models have continued to post strong results on benchmarks while also generating unprecedented revenue. There is not yet clear evidence that scaling is losing its effectiveness, though the chance cannot be dismissed. For now, the authors say that improvements are likely to continue.
Another concern is that the world will run out of training data. The stock of human-generated text is finite and may be exhausted by 2027, the report notes. The authors counter that synthetic data has become a reliable substitute, particularly now that reasoning models can generate and verify their own training material. Multimodal data sources also expand the pool. A bottleneck remains possible, but the weight of the evidence presented suggests that data scarcity is less likely to stop scaling than many critics expect.
Electrical power is a harder challenge to dismiss. On current trajectories, training runs in 2030 will demand multiple gigawatts of electricity, comparable to the output of major power plants. Supplying that power will be expensive, and there are questions about whether grid infrastructure will be ready to absorb the added demand. The report is optimistic, noting that renewable energy and distributed datacenters can keep the scaling curve on track. But this is perhaps the most credible constraint, and it is worth asking how far companies can stretch supply before costs and public pushback slow scaling down.
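The gigawatt figure follows directly from the compute projection. As a rough sketch (the nine-month run length and H100-class energy efficiency below are illustrative assumptions, not numbers from the report):

```python
# Rough average-power estimate for a 10^29 FLOP training run.
train_flop = 1e29            # projected 2030 training run (FLOP)
run_seconds = 270 * 86400    # assumed ~9-month training run (illustrative)
flop_per_joule = 1.4e12      # assumed H100-class efficiency incl. overheads (illustrative)

avg_power_gw = train_flop / run_seconds / flop_per_joule / 1e9
print(f"~{avg_power_gw:.1f} GW average draw")  # ~3 GW
```

More efficient future hardware would pull that number down, but the sheer size of the numerator keeps the answer in the gigawatt range either way.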

The authors caution that one of the most credible risks to continued scaling is a retreat in investor sentiment: scaling AI could simply become too expensive, forcing developers to pull back. That risk exists, but current revenue growth shows little sign of slowing, the report says. If revenues keep compounding, they could support the hundred-billion-dollar training runs projected for 2030. The numbers may sound fanciful, yet they line up with the potential trillions of dollars in productivity gains if AI automates a substantial share of work.
Some have suggested algorithmic breakthroughs might replace scaling as the driver of AI progress. Efficiency has indeed improved, the report says, but always within the same compute growth curve. There is no strong reason to expect algorithms to suddenly outpace hardware scaling, and in practice, new methods usually create more reasons to consume compute, not fewer, the authors say.
Another argument is that AI compute will shift toward inference, particularly as reasoning models take off. Training and inference are in fact growing together, with roughly similar allocations today. Better training also produces models that make inference more valuable and cost-effective, the authors say. A shift toward inference is possible, the report notes, but it is unlikely to undermine training scale-ups anytime soon.
Digital Science Could Accelerate, While Physical Science May Lag
The report also explores AI's potential to improve productivity in scientific research and development. If scaling holds, the biggest gains will come in digital science. In software engineering, the report predicts existing benchmarks such as SWE-bench could be solved by 2026, with tools capable of handling complex scientific coding problems not far behind.

Current benchmark trends suggest that by 2030, AI will be able to autonomously fix issues, implement features, and solve difficult (but well-defined) scientific programming problems, Epoch AI says. (Source: Epoch AI)
Mathematics is also on track for rapid gains. By 2027, AI systems may be able to assist with tasks like formalizing proof sketches and developing argument structures. In biology, AI will increasingly aid in hypothesis generation, the authors say. Systems trained on protein-ligand interaction data already show promise in predicting molecular behavior, and by 2030, these systems could reliably answer complex biological questions. The report cautions that these breakthroughs will remain mostly on the digital side, with more candidate molecules, better predictions, and faster desk research, rather than yielding approved drugs.
Weather prediction is another area that could benefit. AI methods have already outperformed traditional simulations on short to medium-term forecasts, and the report argues that additional data and fine-tuning will further improve model accuracy, especially for rare events.
A limiting factor of AI for science, according to Epoch AI, is not the capability of AI systems but the speed of physical processes. Clinical trials for drugs, regulatory approvals, and the logistics of lab experiments all operate on multi-year cycles. Even if AI suggests breakthrough therapies tomorrow, the medicines approved in 2030 will already be in the pipeline today. This creates a split: digital sciences like math and software will see explosive growth, while experimental sciences will advance at a slower pace.
AI as the New Research Assistant
One of the report’s most concrete predictions is that by 2030, every scientist will have access to an AI assistant comparable to GitHub Copilot. These systems will help with literature review, protein design, and coding, offering 10–20% productivity gains in desk-based fields, and potentially more as the tools mature.

AI assistants for science could also boost accessibility. With AI assistants embedded in research workflows, tasks that once required whole teams of specialists could be put within reach of individual researchers and smaller labs, the report says.
The Takeaway
With this report, Epoch AI makes the case that continued scaling could push AI capabilities far beyond today's in a short amount of time. If the scaling curves hold, the largest training runs of 2030 will consume resources on the scale of nations and cost hundreds of billions of dollars. That level of investment is only worthwhile if AI can deliver corresponding productivity gains, and the authors argue it plausibly can.
At the same time, the report cautions that AI’s role in science will unfold unevenly. Digital disciplines like software and mathematics stand to benefit most, while biology and other experimental sciences will remain tied to slower approval and testing pipelines. What seems more certain is the emergence of AI assistants as standard research tools, reshaping how knowledge work is done even if tangible results come later.
“By 2030, AI is likely to be a key technology across the economy, present in every facet of people’s interaction with computers and mobile devices. Less certain, but plausibly, AI agents might act as virtual coworkers for many, transforming their work through automation. If these predictions come to pass, then it is vitally important that key decision-makers prioritize AI issues as they navigate the next five years and beyond,” the authors say in their conclusion.
The full report is available from Epoch AI.