AI is increasingly being positioned as the next big leap in weather forecasting, and rightly so. Recent advances promise faster and more accurate predictions of extreme weather events. In August 2024, NVIDIA’s StormCast model drew headlines for its potential to forecast storms with unprecedented speed and precision.
Google DeepMind’s GraphCast model has likewise attracted attention for outperforming traditional systems on select weather forecasting benchmarks. Such AI-based models have raised hopes that AI could help communities better prepare for floods, hurricanes, and other climate-driven disasters.
However, when devastating floods hit Texas in July 2025, that promise remained largely unrealized. As floodwaters rose, AI models missed critical cues that could have helped mitigate the damage.
The Trump administration’s proposed $2.2 billion in cuts to NOAA could make it even harder to support the AI research and infrastructure needed to help models like StormCast and GraphCast reach their full potential.
The Texas floods made the limits of today’s AI models painfully clear. As storms intensified over the July 4 weekend, many of the most advanced systems failed to predict the severity of what was coming. It was the traditional high-resolution models, not the AI-driven ones, that gave the most accurate picture of the danger ahead.
“All those new fancy AI models? They missed it too,” said Daniel Swain, a climate scientist at the California Institute for Water Resources, in a live YouTube talk on July 7.
Swain pointed out that while AI systems have advanced rapidly, they still fall short when it comes to forecasting extreme weather that develops quickly in small geographic areas. The intense rainfall in the Texas Hill Country was highly localized, and most AI models were not built to handle that kind of detail.
It was NOAA’s high-resolution forecasting tools, built specifically to simulate storm behavior at the local level, that performed better and gave forecasters the earliest signs of serious trouble.
Swain also warned that these models may not be around much longer. The proposed budget would shut down the very research centers responsible for developing those high-performing models, a move that could remove some of the most reliable forecasting tools just as climate-driven disasters become more frequent. In his view, the risk is not just losing ground but erasing systems that already work.
NOAA, for its part, has offered a more confident assessment. In a statement, spokesperson Kim Doster said the agency’s research and forecasting priorities would remain intact, even under the proposed budget cuts. She described a broader modernization effort already underway, with AI playing a central role in future forecasts.
Commerce Secretary Howard Lutnick, who oversees NOAA, echoed that message, saying the administration is committed to delivering faster, smarter weather data through advanced technologies. Doster added that NOAA is working closely with scientists to improve storm mapping, reduce alert times, and make better data available to the public.
The agency’s confidence sits uneasily with what’s at stake. The forecasting tools that performed best during the Texas floods came from NOAA’s own research programs. Those same programs could be affected by the proposed budget cuts. While AI remains a major priority, it is unclear how far that work can go if the core systems and teams behind it lose support.
It’s worth noting that NOAA has been developing its own AI tools, including Project EAGLE, which is designed to improve forecasts for high-impact events. But the system was reportedly not operational during the Texas floods and could not assist in the response.
The failures in Texas weren’t NOAA’s alone. While the agency has taken much of the spotlight, many of the most advanced AI forecasting tools today are being developed in the private sector. Companies are racing to build global prediction models using deep learning and vast datasets. But despite their technical edge, these systems also missed the mark.
“Forecasting precipitation at the local scale is very challenging, and has not really been the focus of most of the AI models in use now,” said Russ Schumacher, Colorado’s state climatologist. That limitation became clear when many private-sector models also failed to detect the scale and timing of the floods.
One issue is that most AI forecasting systems learn from historical weather data. But events like the 2025 Texas floods are rare and extreme, which means there just aren’t many past examples to learn from. Without that reference point, the models have a hard time spotting when something truly out of the ordinary is unfolding.
Another challenge is resolution. Many AI models are designed to capture broad global patterns, not the kind of local storms that lead to flash floods. That limits their ability to detect fast-moving and high-impact events in specific areas. Until AI tools can operate at a finer scale and incorporate real-time data more effectively, their predictions will continue to fall short.
Still, some experts believe AI has an important role to play in the future of forecasting. Tim Gallaudet, a former acting NOAA administrator, recently argued that the National Weather Service should expand its use of AI across atmospheric, oceanic, and hydrologic models to improve accuracy.
The Texas floods, however, served as a reality check. They showed that even the most advanced models, whether from government labs or private companies, still have trouble with fast-moving, localized disasters. AI is promising, but it is not ready to stand on its own.
For now, the gaps are clear. Improving forecasts will take more than better technology. It will depend on steady investment in the systems, data, and people that help communities prepare before the next crisis hits.