EDMONTON, CANADA – JANUARY 28: The DeepSeek logo is displayed on three cell phones in front of a computer screen showing Nvidia CEO Jensen Huang holding Nvidia’s latest chip, on January 28, 2025, in Edmonton, Canada. (Photo by Artur Widak/NurPhoto via Getty Images)
The global AI race hit a new inflection point in recent weeks. OpenAI struck a multi-year deal to deploy 6 gigawatts of AMD GPUs, while Nvidia invested $5 billion in Intel to expand chip packaging capacity. Together, the deals reveal how the U.S. ecosystem of model developers, chipmakers, and cloud giants is evolving into a tightly interdependent network—each financing the other’s capacity in a trillion-dollar loop. Across the Pacific, meanwhile, China’s leading AI firms are taking a very different path: open-sourcing their models, optimizing for local chips, and trading scale for adaptability.
Control the Stack—or Be the Stack
For the American giants, the strategy is clear: control the full stack of AI production, from chips to compute to models. The OpenAI–AMD pact locks in long-term access to Instinct GPUs—starting with MI450s in 2026—and gives OpenAI leverage to diversify away from Nvidia’s dominant supply. For AMD, it marks a defining moment. After years in Nvidia’s shadow, the company finally lands a marquee AI customer and a multi-generation commitment that validates its hardware and software roadmap.
Nvidia, meanwhile, is hedging its own risk. Its $5 billion investment in Intel is as much a bet on supply chain resilience as on future chip technology. Intel’s advanced packaging methods—Foveros and EMIB—have become essential for scaling GPU throughput. By injecting capital, Nvidia secures a packaging channel outside its dependence on TSMC. For Intel, the partnership restores relevance in an AI landscape that had largely passed it by.
Oracle is quietly emerging as the fourth node in this circle. The cloud giant has deepened partnerships with Nvidia and reportedly signed multi-year infrastructure deals with OpenAI that could total hundreds of billions over time. Oracle’s aim is straightforward: turn raw GPU capacity into predictable AI services through NVIDIA AI Enterprise and NIM microservices built directly into Oracle Cloud Infrastructure. These moves mark the arrival of the “AI factory”—a vertically integrated supply chain where data, chips, and compute are financed together rather than bought off the shelf.
The Circular Economy of Silicon
This new model of AI financing is as much about cash flow as it is about compute. The Financial Times recently described it as a trillion-dollar web of interlocking commitments: OpenAI pays AMD for chips; AMD reinvests in new fabs and packaging; Nvidia funds Intel to expand assembly; Oracle pre-purchases GPU clusters to serve AI clients like OpenAI. Each player’s balance sheet supports another’s growth.
The risk, however, is systemic. When everyone’s revenue depends on everyone else’s delivery, a single delay—whether in wafer supply, packaging, or power availability—can ripple across the entire sector. The structure resembles early-2000s telecom finance, when long-term capacity pre-purchases inflated valuations faster than real demand could grow. None of these deals signals a bubble outright, but they do show how quickly AI’s industrial phase has turned into a game of leverage and long-duration contracts.
China’s Countermove: Open Source and Frugality
While American companies are locking themselves into long-term, capital-heavy alliances, China’s AI players are doubling down on open-source efficiency. With export controls limiting access to Nvidia’s highest-end GPUs, Chinese firms are maximizing output from domestic silicon and optimizing their models for mixed hardware environments.
Tencent’s Hunyuan suite has become the country’s flagship in this effort. Its Hunyuan Image 3.0—an 80-billion-parameter text-to-image system—was recently released with open weights and a commercial license, making it one of the largest open models in the world. Its multimodal sibling, Hunyuan-Large-Vision, now tops Chinese leaderboards on the OpenCompass benchmark, proving that open architectures can compete head-to-head with proprietary Western systems.

DeepSeek, another rising name, is China’s most prominent reasoning model. The open-weight DeepSeek-R1 has been praised for near-parity performance in math and code generation, echoing the capabilities of much larger closed models. It has inspired a domestic “open-weight movement” that prizes transparency and reproducibility over corporate secrecy.

Kimi, developed by Moonshot AI, is pushing a trillion-parameter Mixture-of-Experts design that activates only about 32 billion parameters per inference. This architecture dramatically reduces compute requirements, aligning with China’s pragmatic approach to constrained hardware.
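To make the Mixture-of-Experts idea concrete, here is a minimal sketch of top-k expert routing in PyTorch. The layer sizes, expert count, and k are toy values chosen for illustration—not Moonshot’s actual configuration—but the mechanism is the same: each token is routed to only k of the experts, so active compute stays a small fraction of total parameters.

```python
# Minimal top-k Mixture-of-Experts routing sketch (toy sizes, not Kimi's config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (tokens, d_model)
        scores = self.router(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # each token picks its top k experts
        weights = F.softmax(weights, dim=-1)         # normalize the chosen experts' weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens whose slot-th choice is expert e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(1)
                    out[mask] += w * expert(x[mask]) # only selected experts do any work
        return out

tokens = torch.randn(16, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([16, 512]); only 2 of 8 experts ran per token
```

At Kimi’s reported scale, the same principle means roughly 32 billion of a trillion parameters run per token—about 3% of the model—which is what makes the design viable on constrained hardware.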
Alibaba’s Qwen models complete the picture—high-performing, open-sourced, and tuned for downstream integration across industries. In aggregate, these projects form a distinct strategy: fewer megadeals, more modular innovation. With limited access to cutting-edge chips, China’s ecosystem is learning to do more with less—and in the process, it’s lowering the cost of AI experimentation for thousands of startups.
Diverging Philosophies, Same Destination
Both ecosystems share a singular goal: dominance in the next era of AI infrastructure. But their philosophies diverge sharply.
The U.S. model is capital-intensive and vertically integrated. It depends on vast, centralized “AI factories” run by a handful of companies—OpenAI, Microsoft, Nvidia, and Oracle—that coordinate production, finance, and deployment at unprecedented scale. It’s a model designed for control and speed, but one that magnifies exposure to market and supply shocks.
China’s model is distributed and software-driven. By open-sourcing foundation models and emphasizing low-cost adaptability, it spreads innovation across a broader base of contributors. This lowers barriers to entry and dilutes systemic risk. It’s less about owning the entire stack and more about ensuring that no single chokepoint—be it a U.S. export control or a GPU shortage—can derail progress.
The New Chokepoint: Packaging and Power
If the last few years were about GPUs, the next battle will be about packaging and power. Nvidia’s move into Intel shows that the bottleneck has shifted from chip design to physical integration and electricity. Advanced packaging—where multiple chips are stacked and connected with high-bandwidth memory—is now the constraint on global AI capacity. Intel’s 18A process and Foveros technology could become the industry’s next critical resource.
Power is the other limit. OpenAI’s 6 GW AMD order implies data centers the size of small cities. The buildout will strain power grids from Virginia to Singapore. Hyperscalers are already exploring direct partnerships with utilities, nuclear startups, and energy traders to secure long-term supply. This convergence of AI and energy finance is reshaping the economics of both industries.
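To see why 6 GW reads as city-scale, a quick back-of-envelope calculation helps. The per-accelerator and per-household draws below are rough assumptions for illustration, not disclosed specifications.

```python
# Back-of-envelope arithmetic for a 6 GW AI buildout.
# Per-unit figures are assumptions for illustration, not disclosed specs.
TOTAL_WATTS = 6e9             # OpenAI's reported 6 GW AMD commitment
WATTS_PER_ACCELERATOR = 1400  # assumed ~1.4 kW per GPU incl. cooling overhead
WATTS_PER_US_HOME = 1200      # ~10,500 kWh/year average household ≈ 1.2 kW continuous

print(f"~{TOTAL_WATTS / WATTS_PER_ACCELERATOR / 1e6:.1f} million accelerators")
print(f"~{TOTAL_WATTS / WATTS_PER_US_HOME / 1e6:.1f} million homes' continuous draw")
# ~4.3 million accelerators; ~5.0 million homes
```

Under those assumptions, 6 GW is the continuous draw of roughly five million U.S. homes—demand on a scale that utilities cannot absorb without new generation.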
Outlook: Sustainability and the Next Frontier
In the near term, OpenAI’s AMD partnership will test whether the industry can truly support a multi-vendor AI stack. AMD must close its software gap with Nvidia’s CUDA ecosystem—ROCm, compilers, and developer tools will determine how fast new models can move from prototype to production. Nvidia, for its part, will use its Intel partnership to deepen control over packaging and stay ahead in throughput per watt.
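One reason the gap is narrowing at the framework level: PyTorch’s ROCm builds expose AMD GPUs through the same torch.cuda interface, so device-agnostic code can run unchanged on either vendor’s hardware. A minimal sketch:

```python
# Vendor-agnostic PyTorch: ROCm builds surface AMD GPUs via the torch.cuda API,
# so one code path serves both Nvidia and AMD accelerators.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    # torch.version.hip is set on ROCm builds, None on CUDA builds
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    print(f"GPU backend: {backend}")

model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)
with torch.inference_mode():
    y = model(x)              # identical call on an MI-series or H-series GPU
print(y.shape)                # torch.Size([8, 1024])
```

The harder gaps sit below this layer—in kernels, compilers, and cluster tooling—which is where the MI450 generation will be judged.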
China’s open-source strategy will keep gaining ground, especially as domestic regulators begin to favor transparent, locally auditable models for government and enterprise use. If Tencent, DeepSeek, and Moonshot maintain their current pace of iteration, they could reshape Asia’s AI supply chain around openness rather than exclusivity.
The big question is whether the U.S. circular megadeal system is sustainable. When capital leads the revolution, corrections can be brutal—but they also clear the field for the durable players. The AI arms race now looks less like a sprint for intelligence and more like a global infrastructure contest. Whoever balances scale with resilience—whether through trillion-dollar GPU factories or lightweight open models—will define the next decade of technology.