At Computex 2025, Nvidia unveiled a multi-front strategy to expand its role in global AI infrastructure. Going beyond the typical performance benchmarks or chip shipment announcements, Nvidia outlined a broader vision, one centered on enabling custom systems, fostering international alliances, and embedding itself deeper into the architecture of modern compute.
Already a key player in the hardware space, the chip manufacturer is now positioning itself as a global infrastructure enabler, shaping not just how AI runs, but where it is built and how it operates. That shift was underscored by the launch of NVLink Fusion, a new interconnect designed to give cloud providers and chipmakers more flexibility in how they build AI systems around Nvidia’s platforms.
With NVLink Fusion, Nvidia has opened its closed ecosystem to allow custom CPUs and AI accelerators from other companies to connect directly to Nvidia GPUs. MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys, and Cadence are among the first to adopt NVLink Fusion.
In addition to hardware advancements, Nvidia is introducing Mission Control, a software platform designed to manage AI data center operations efficiently. Together with the high-speed connectivity provided by NVLink Fusion, these developments are not only about improving performance but also about enabling hyperscalers and governments to develop AI factories with minimized reliance on U.S. cloud providers.
These advancements signal Nvidia’s strategic transition from proprietary full-stack solutions to a more open, ecosystem-driven AI infrastructure model.
“A tectonic shift is underway: for the first time in decades, data centers must be fundamentally rearchitected — AI is being fused into every computing platform,” said Jensen Huang, founder and CEO of NVIDIA. “NVLink Fusion opens NVIDIA’s AI platform and rich ecosystem for partners to build specialized AI infrastructures. This incredible body of work now becomes flexible and open for anybody to integrate into.”
The new interconnect powers Nvidia’s GB200-based rack-scale systems. The company claims it delivers up to 1.8 TB/s per GPU – 14 times faster than PCIe Gen5, the current industry-standard interface. This is the kind of tightly integrated compute infrastructure needed to train next-generation AI models and power full-scale AI factories.
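As a rough sanity check on that comparison (assuming PCIe Gen5 x16’s nominal throughput of roughly 128 GB/s per direction, before protocol overhead), the ratio works out close to the figure Nvidia cites:

```python
# Back-of-envelope check of the NVLink-vs-PCIe bandwidth claim.
# Assumption: PCIe Gen5 x16 at ~128 GB/s nominal, unidirectional
# (32 GT/s per lane x 16 lanes / 8 bits per byte).
nvlink_gbps = 1800        # 1.8 TB/s per GPU, as claimed
pcie_gen5_x16_gbps = 128  # nominal PCIe Gen5 x16

ratio = nvlink_gbps / pcie_gen5_x16_gbps
print(f"NVLink advantage: ~{ratio:.0f}x")  # ~14x
```

The exact multiple depends on whether you count unidirectional or aggregate bandwidth and how much protocol overhead you subtract, but the order of magnitude holds either way.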
Nvidia is not only broadening the scope of who can build with its technology, but it is also making some calculated bets on where its infrastructure should take root. The company is planning to make significant investments in Taiwan, which is home to Nvidia’s key manufacturing partner TSMC, and a growing hub of AI and semiconductor innovation.
Huang revealed that Nvidia is strengthening its partnership with Foxconn Hon Hai Technology Group and collaborating with the Taiwan government to develop a powerful AI factory supercomputer. This initiative will provide cutting-edge Nvidia Blackwell technology to researchers, startups, and key industries, including TSMC. “Having a world-class AI infrastructure here in Taiwan is really important,” Huang said.
Additionally, Nvidia plans to establish a new local headquarters named “Nvidia Constellation” in Taipei’s Beitou Shilin Science Park, further embedding itself in Taiwan’s tech ecosystem. During his keynote address at Computex, Huang introduced a dramatic video showing Nvidia’s Santa Clara office launching into space and landing in Taiwan.
The news of ambitious expansion plans in Taiwan comes at a time when Nvidia is under pressure from the U.S. government to limit advanced chip exports to China. Some industry experts view the Taiwan investments as strategic alignment to mitigate potential trade restrictions and maintain operational resilience.
At Computex, Huang praised the AI advancements made by DeepSeek. His remarks amount to an acknowledgment, even a validation, that AI innovation is becoming more decentralized and that non-Western AI projects are growing in sophistication. Given that U.S. lawmakers recently accused DeepSeek of data harvesting, AI theft, and espionage, Huang’s recognition is notable. It suggests Nvidia is taking a pragmatic approach that prioritizes technological collaboration over geopolitical tensions.
While Nvidia is forging international collaborations to advance AI infrastructure, it’s also focusing on empowering individual developers and researchers. The company introduced DGX Spark – a compact AI workstation designed to deliver data center-level performance directly to the desktop.
The DGX Spark is built around the GB10 Grace Blackwell Superchip, which Nvidia says delivers up to 1,000 AI TOPS and comes with 128GB of unified memory. This setup lets developers work with large models directly on their own machines instead of depending on cloud-based resources.
With its compact design and energy-efficient performance, the DGX Spark is well-suited for researchers and developers working with limited resources. Nvidia says it can process models with up to 200 billion parameters, enabling sophisticated AI tasks like fine-tuning and inference without relying on external computing power.
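A quick estimate suggests why a 200-billion-parameter model is plausible on 128GB of unified memory, assuming aggressive 4-bit weight quantization (0.5 bytes per parameter) and setting aside activation and KV-cache overhead:

```python
# Rough memory estimate for fitting a 200B-parameter model
# into 128 GB of unified memory.
# Assumption: 4-bit quantized weights (0.5 bytes/parameter);
# activations and KV cache are ignored for simplicity.
params = 200e9
bytes_per_param = 0.5  # 4-bit quantization

weights_gb = params * bytes_per_param / 1e9
print(f"Quantized weights: ~{weights_gb:.0f} GB of 128 GB")  # ~100 GB
```

At higher precisions (FP16 needs roughly 2 bytes per parameter, or ~400 GB), the same model would not fit, which is why quantization is central to claims like this.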
“AI has revolutionized every layer of the computing stack — from silicon to software,” said Huang. “Direct descendants of the DGX-1 system that ignited the AI revolution, DGX Spark and DGX Station are created from the ground up to power the next generation of AI research and development.”
The announcements at Computex 2025 reflect Nvidia’s strategic shift from a GPU-centric company to a global AI infrastructure leader. By embracing open standards, forging international partnerships, and investing in regional AI ecosystems, Nvidia is solidifying its role at the heart of the world’s digital transformation.