OpenAI has revealed that it has ‘no active plans’ to use Google’s in-house chips (Tensor Processing Units, or TPUs) for its AI products, Reuters reported on June 30.
This comes after a report in The Information that OpenAI was planning to use Google’s TPUs to power ChatGPT and other AI products.
However, Reuters quoted an OpenAI spokesperson as saying that while the company is in ‘early testing with some of Google’s TPUs’, it has no plans to deploy them at scale right now.
OpenAI currently ranks as one of NVIDIA’s largest GPU customers for training its AI models. Over the past couple of years, the massive surge in demand for NVIDIA’s GPUs to build AI models has, on several occasions, made the chipmaker the most valuable company on the US stock markets.
However, several companies are looking to reduce their reliance on NVIDIA’s GPUs to improve cost efficiency.
Google utilises its in-house hardware systems, known as TPUs, to train and deploy its AI models.
In the technical report for the Gemini 2.5 models, Google revealed that the models were trained on a massive cluster of its fifth-generation TPUs.
Furthermore, Amazon Web Services (AWS) senior director for customer and product engineering, Gadi Hutt, told CNBC that Project Rainier, the company’s initiative to build an AI supercomputer, will now contain half a million of the company’s in-house Trainium2 chips, an order that would traditionally have gone to NVIDIA.
Hutt also said that while NVIDIA’s Blackwell, the current flagship platform, offers higher raw performance than Trainium2, the latter provides better cost efficiency. AWS also claims that Trainium2 offers a 30-40% better price-performance ratio than the current generation of GPUs.
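To make the price-performance claim concrete: the metric is typically work delivered per dollar spent, so a chip can win on it even while losing on raw speed. The sketch below uses entirely hypothetical throughput and cost figures (none come from AWS or NVIDIA) purely to show how a roughly 30-40% advantage can arise for a slower but cheaper accelerator.

```python
# Illustration of a price-performance comparison.
# All numbers are hypothetical placeholders, not published AWS or NVIDIA figures.

def price_performance(throughput_tokens_per_s: float, cost_per_hour: float) -> float:
    """Work delivered per dollar: tokens processed per $1 of compute time."""
    return throughput_tokens_per_s * 3600 / cost_per_hour

# Hypothetical: a faster but pricier GPU vs. a slower but cheaper accelerator.
gpu = price_performance(throughput_tokens_per_s=1000, cost_per_hour=6.0)
trn = price_performance(throughput_tokens_per_s=800, cost_per_hour=3.5)

print(f"GPU:        {gpu:,.0f} tokens per dollar")
print(f"Trainium2:  {trn:,.0f} tokens per dollar")
print(f"Advantage:  {trn / gpu - 1:.0%}")  # ~37% here, despite 20% lower raw speed
```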
OpenAI is also developing its own in-house chip. Commercial Times, a Taiwanese media outlet, recently reported that OpenAI is expected to launch the chip in the fourth quarter of this year, with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) assisting the company in building it.