Welcome to Eye on AI! In this edition…OpenAI releases report outlining efforts to block malicious use of its tools…Amazon continues its AI data center push in the South, with plans to spend $10 billion in North Carolina…Reddit sues Anthropic, accusing it of stealing data.
After a few days in Washington, D.C. this week, one thing is clear: “Big AI”—my shorthand for companies including Google, OpenAI, Meta, Anthropic, and xAI that are building and deploying the most powerful AI models—isn’t just present in the nation’s capital. It’s being welcomed with open arms.
Government agencies are eager to deploy their models, integrate their tools, and form public-private partnerships that will ultimately shape policy, national security, and global strategy inside the Beltway. And frontier AI companies, which also serve millions of consumer and business customers, are ready and willing to do business with the U.S. government. For example, just today Anthropic announced a new set of AI models tailored for U.S. national security customers, while Meta recently revealed that it’s making its Llama models available to defense partners.
This week, former Google CEO Eric Schmidt was a big part of bringing Silicon Valley and Washington together. I attended an AI Expo that served up his worldview, which sees artificial intelligence, business, geopolitics, and national defense as interconnected forces reshaping America’s global strategy (one that will be chock-full of drones and robots if he gets his way). I also dressed up for a gala hosted by the Washington AI Network, sponsored by OpenAI, Meta, Microsoft, and Amazon, and featuring a keynote speech from U.S. Commerce Secretary Howard Lutnick.
A parallel AI universe in D.C.
Both events felt like a parallel AI universe to this D.C. outsider: In this universe, discussions about AI are less about increasing productivity or displacing jobs, and more about technological supremacy and national survival. Winning the AI “race” against China is front and center. Public-private partnerships are not just desirable—they’re essential to help the U.S. maintain an edge in AI, cyber, and intelligence systems.
I heard no references to Elon Musk and DOGE’s “move fast and break things” approach to pushing AI tools into the IRS or the Veterans Administration. There were no discussions about AI models and copyright concerns. No one was hand-wringing about Anthropic’s new model trying to blackmail its way out of being shut down.
Instead, at the AI Expo, senior leaders from the U.S. military talked about how the recent Ukrainian drone attacks on Russian air bases are prime examples of how rapidly AI is changing the battlefield. Federal procurement experts discussed how to accelerate the Pentagon’s notoriously slow acquisition process to keep pace with commercial AI advances. OpenAI touted its o3 reasoning model, now deployed on a secure government supercomputer at Los Alamos National Laboratory.
At the gala, Lutnick made the stakes explicit: “We must win the AI race, the quantum race—these are not things that are open for discussion.” To that end, he added, the Trump administration is focused on building another terawatt of power to support the massive AI data centers sprouting up across the country. “We are very, very, very bullish on AI,” he said.
The audience—packed with D.C.-based policymakers and lobbyists from Big AI—applauded. Washington may not be a tech town, but if this week was any indication, Silicon Valley and the nation’s capital are learning to speak the same language.
Critics push back on growing convergence of AI and D.C.
Still, the growing convergence of Silicon Valley and Washington makes many observers uneasy—especially given that it’s been just seven years since thousands of Google employees protested the company’s involvement in a Pentagon AI project, ultimately forcing it to back out. At the time, Google even pledged not to use its AI for weapons or surveillance systems that violated “internationally accepted norms.”
On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of “pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.” The organization says the public needs “to reckon with the ways in which today’s AI isn’t just being used by us, it’s being used on us.”
But the parallel AI universe I witnessed—where Big AI and the D.C. establishment are fusing interests—is already realigning power and policy. The biggest question now is whether they’re doing so safely, transparently, and in the public interest—or simply in their own.
The race is on.
With that, here’s the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman