Meta CEO Mark Zuckerberg laid down the newest marker in generative AI training on Wednesday, saying that the next major release of the company’s Llama model is being trained on a cluster of GPUs that’s “bigger than anything” else that’s been reported.
Llama 4 development is well underway, Zuckerberg told investors and analysts on an earnings call, with an initial launch expected early next year. “We’re training the Llama 4 models on a cluster that is bigger than 100,000 H100s, or bigger than anything that I’ve seen reported for what others are doing,” Zuckerberg said, referring to the Nvidia chips popular for training AI systems. “I expect that the smaller Llama 4 models will be ready first.”
Increasing the scale of AI training with more computing power and data is widely believed to be key to developing significantly more capable AI models. While Meta appears to have the lead now, most of the big players in the field are likely working toward using compute clusters with more than 100,000 advanced chips. In March, Meta and Nvidia shared details about clusters of about 25,000 H100s that were used to develop Llama 3. In July, Elon Musk touted his xAI venture having worked with X and Nvidia to set up 100,000 H100s. “It’s the most powerful AI training cluster in the world!” Musk wrote on X at the time.
On Wednesday, Zuckerberg declined to offer details on Llama 4's potential advanced capabilities but alluded vaguely to "new modalities," "stronger reasoning," and models that are "much faster."
Meta's approach to AI is proving a wild card in the corporate race for dominance. Llama models can be downloaded in their entirety for free, in contrast to the models developed by OpenAI, Google, and most other major companies, which can be accessed only through an API. Llama has proven hugely popular with startups and researchers looking to have complete control over their models, data, and compute costs.
Although touted as "open source" by Meta, the Llama license does impose some restrictions on the model's commercial use. Meta also does not disclose details of the models' training, which limits outsiders' ability to probe how they work. The company released the first version of Llama in February 2023 and made the latest version, Llama 3.2, available this September.
Managing such a gargantuan array of chips to develop Llama 4 is likely to present unique engineering challenges and require vast amounts of energy. Meta executives on Wednesday sidestepped an analyst question about energy access constraints in parts of the US that have hampered companies’ efforts to develop more powerful AI.
According to one estimate, a cluster of 100,000 H100 chips would require 150 megawatts of power. The largest national lab supercomputer in the United States, El Capitan, by contrast requires 30 megawatts of power. Meta expects to spend as much as $40 billion in capital this year to furnish data centers and other infrastructure, an increase of more than 42 percent from 2023. The company expects even more torrid growth in that spending next year.
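The ~150-megawatt figure is roughly consistent with a back-of-envelope calculation. The numbers below are assumptions for illustration, not from the article: an H100 SXM GPU draws about 700 watts at its rated TDP, and a commonly used rule of thumb roughly doubles that to account for host servers, networking, and cooling.

```python
# Back-of-envelope check of the ~150 MW estimate for a 100,000-H100 cluster.
# Assumed figures (not from the article): ~700 W TDP per H100 SXM GPU,
# and a rough 2x multiplier for servers, networking, and cooling overhead.
NUM_GPUS = 100_000
WATTS_PER_H100 = 700        # approximate H100 SXM board TDP
OVERHEAD_FACTOR = 2.0       # assumed facility overhead multiplier

gpu_power_mw = NUM_GPUS * WATTS_PER_H100 / 1_000_000
total_power_mw = gpu_power_mw * OVERHEAD_FACTOR

print(f"GPUs alone: {gpu_power_mw:.0f} MW")        # 70 MW
print(f"With overhead: {total_power_mw:.0f} MW")   # 140 MW
```

Under these assumptions the GPUs alone draw about 70 MW, and facility overhead brings the total into the same ballpark as the cited 150 MW estimate.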