Chip manufacturer Nvidia has officially started production of its AI supercomputer hardware at U.S. factories operated through manufacturing partnerships, marking the first time such hardware has been built within the country.
On Monday, Nvidia announced in a press release that production of its Blackwell chips has begun at a factory operated in partnership with TSMC in Phoenix, Arizona. The chipmaker is also building two supercomputer manufacturing plants in Texas, in partnership with Foxconn in Houston and Wistron in Dallas.
Nvidia says mass production of the Blackwell GPUs is expected to ramp up at both Texas facilities within the next 12 to 15 months; together with the Arizona factory, the sites represent more than a million square feet of U.S. manufacturing space.
“The engines of the world’s AI infrastructure are being built in the United States for the first time,” said Jensen Huang, Nvidia Founder and CEO. “Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency.”
The manufacturer also says it aims to produce up to $500 billion worth of AI infrastructure in the U.S. within the next four years, through these partnerships and others with Amkor and SPIL. Nvidia also expects the facilities to create hundreds of thousands of jobs and drive trillions of dollars in economic security in the decades to come.
NVIDIA CEO Jensen Huang talks about Elon Musk’s prowess in AI across his several companies:
pic.twitter.com/m105zGBvin
— TESLARATI (@Teslarati) January 7, 2025
READ MORE ON NVIDIA: Elon Musk explains reasoning for Nvidia chip re-route from Tesla to X
Blackwell chips were designed for high-powered AI data center applications, and the news comes as Elon Musk’s xAI, Tesla, and several other companies work to expand their supercomputing infrastructure. It also comes amid an ongoing tariff war launched by the Trump administration, which is expected to hit a wide range of products, including semiconductor chips.
Musk estimated last year that Nvidia purchases would account for roughly $3 to $4 billion of Tesla’s $10 billion in AI spending, and the company spent much of the year building a massive supercomputing cluster at its Gigafactory in Texas. The site houses 50,000 Nvidia H100 chips used to help train Tesla’s Full Self-Driving (FSD) system, and Musk said last June that the facility’s power needs would grow from 130 MW to over 500 MW within roughly 18 months.
Additionally, Musk’s xAI began operations at a Memphis, Tennessee facility with 100,000 Nvidia H100 and H200 units last July, and the site is being expanded to 200,000 units. Nvidia was also a strategic investor in xAI’s $6 billion Series C funding round, alongside fellow GPU manufacturer AMD.
Weeks before the funding was announced in December, it was also reported that xAI had secured a priority $1.08 billion order of Nvidia GB200 AI servers after Musk personally approached Huang.