OpenAI’s push to build next-generation AI supercomputers has triggered intense competition among chipmakers. Nvidia (NASDAQ:NVDA), the undisputed GPU leader, has pledged as much as $100 billion to fund OpenAI’s massive data center build-out, with the AI company set to fill those facilities with millions of Nvidia chips. AMD, meanwhile, struck its own partnership to deploy about 6 gigawatts’ worth of its accelerators for OpenAI. AMD stock has surged close to 30% since it announced its OpenAI deal, while Nvidia, too, has soared to near all-time highs, with its market cap hovering around $4.5 trillion. As Nvidia and AMD solidify their roles at the heart of OpenAI’s compute strategy, could Intel (NASDAQ:INTC) – long seen as an outsider in the AI hardware race – surprise with a similarly large partnership with OpenAI?
TOKYO, JAPAN – FEBRUARY 3: Open AI CEO Sam Altman speaks during a talk session with SoftBank Group CEO Masayoshi Son at an event titled “Transforming Business through AI” in Tokyo, Japan, on February 03, 2025. SoftBank and OpenAI announced that they have agreed a partnership to set up a joint venture for artificial intelligence services in Japan today. (Photo by Tomohiro Ohsumi/Getty Images)
INTC stock has jumped meaningfully of late, but there is significant risk in relying on a single stock. However, there is huge value in the broader diversified approach we take with the Trefis High Quality Portfolio. Let us ask you this: Over the last 5 years, which index do you think the Trefis High Quality Portfolio outperformed – the S&P 500, S&P 1500 Equal Weighted, or both? The answer might surprise you. See how our advisory framework helps stack the odds in your favor.
From Training to Inference: A Shifting Battlefield
Inference workloads – the stage where trained models generate real-world outputs – could be Intel’s best shot to enter the AI conversation. Training large language models like GPT-4 demands high-end GPUs, an area where cutting-edge chips like Nvidia’s H100 and A100 dominate. Related: Nvidia Stock 2x To $350? But once trained, these models must operate efficiently at scale, handling billions of daily queries from AI assistants, recommendation engines, and enterprise tools.
As AI applications scale to hundreds of millions of users, demand for inference capacity is set to explode. In fact, the inference market will likely surpass the training market in both volume and total revenue. Here, cost efficiency, availability, and energy performance matter more than raw computing power alone. This evolving landscape gives Intel a potential opening to leverage its manufacturing scale and balanced performance to deliver more affordable AI infrastructure.
Gaudi 3: Intel’s Price-Performance Edge
Intel’s Gaudi 3 AI accelerator highlights its potential. In Dell’s AI platform benchmarks, Gaudi 3 delivered a 70% better price-to-performance ratio in inference throughput on Meta’s Llama 3 70B model compared to Nvidia’s H100 GPU. Priced between $16,000 and $20,000 – roughly half the cost of an H100 – Gaudi 3 might offer a compelling value proposition for AI inference workloads. Also see: AMD, Marvell, Intel: AI Inference Decides The Next Multi-Trillion Chip Stock.
OpenAI’s next phase of expansion is likely to prioritize scaling inference rather than pure training performance, and Intel could emerge as a key enabler of affordable compute. Gaudi 3’s use of industry-standard Ethernet networking – in contrast to Nvidia’s InfiniBand and proprietary NVLink interconnects – may also appeal to customers seeking more flexible and cost-effective data center integration.
The Foundry Factor
Beyond chips, Intel’s foundry ambitions add another layer of opportunity. The company has poured over $90 billion into expanding its manufacturing capacity over the past four years, aiming to close the gap with TSMC and Samsung. While Intel has struggled to attract leading-edge external customers for fabricating accelerated computing chips, the pivot toward inference could change that.
Its new Intel 18A node introduces RibbonFET gate-all-around transistors and PowerVia backside power delivery – both designed to boost performance and energy efficiency. PowerVia, in particular, could deliver a meaningful edge for AI inference and high-performance computing by minimizing power loss across vast data centers.
There’s also another factor playing in Intel’s favor: capacity. While TSMC remains the leading fab for cutting-edge chips, its 3nm and 5nm lines are fully booked through 2026, and demand for its 2nm node is already surging. OpenAI and other hyperscalers could soon face supply bottlenecks – a gap Intel’s expanding foundry network may be well positioned to fill.
OpenAI’s Massive Compute Ambition
OpenAI’s ambitions go far beyond incremental upgrades. The company is planning to build one of the largest AI data center networks in history through its “Stargate” infrastructure program, targeting about 10 gigawatts (GW) of power capacity by the end of 2025. OpenAI’s planned $500 billion investment could require tens of millions of GPUs to train and deploy next-generation AI models. Such an undertaking demands not only cutting-edge chips but also diverse and resilient supply chains. With TSMC’s fabs likely to be operating at capacity, and global demand for AI accelerators surging, OpenAI may soon have no choice but to diversify its chip partnerships. Intel’s mix of cost-effective accelerators and advanced manufacturing could put it back in the game. Also see: Up 500%, What’s Happening With AVGO Stock?
The Trefis High Quality (HQ) Portfolio, a collection of 30 stocks, has a track record of comfortably outperforming a benchmark that includes all three indexes – the S&P 500, Russell, and S&P MidCap – and has achieved returns exceeding 91% since inception. Why is that? As a group, HQ Portfolio stocks provided better returns with less risk versus the benchmark index; less of a roller-coaster ride, as evident in HQ Portfolio performance metrics.