GPU cloud provider CoreWeave has made Nvidia GB200 systems available at scale.
The company announced the wide availability of the Grace Blackwell systems on April 15. Early customers are already using the offering, including IBM, Mistral AI, and Cohere.
According to a blog post from Nvidia, CoreWeave has deployed “thousands” of Nvidia Blackwell GPUs, adding: “Systems built on Nvidia Grace Blackwell are in full production, transforming cloud data centers into AI factories that manufacture intelligence at scale and convert raw data into real-time insights with speed, accuracy, and efficiency.”
Michael Intrator, co-founder and CEO of CoreWeave, said: “CoreWeave is built to move faster – and time and again, we’ve proven it by being first to operationalize the most advanced systems at scale. Today is a testament to our engineering prowess and velocity, as well as our relentless focus on enabling the next generation of AI.”
Ian Buck, VP of hyperscale and HPC at Nvidia, added: “Enterprises and organizations around the world are racing to turn reasoning models into agentic AI applications that will transform the way people work and play. CoreWeave’s rapid deployment of Nvidia GB200 systems delivers the AI infrastructure and software that are making AI factories a reality.”
Cohere is using the GB200 systems for North, its personalized AI agents offering. According to Nvidia, the company is seeing up to three times the training performance for 100 billion-parameter models compared with previous-generation Nvidia Hopper GPUs, even without Blackwell-specific optimizations.
IBM, meanwhile, is using CoreWeave’s GB200 systems to train its next-generation Granite models.
“We are excited to see the acceleration that Nvidia GB200 NVL72 can bring to training our Granite family of models,” said Sriram Raghavan, vice president of AI at IBM Research. “This collaboration with CoreWeave will augment IBM’s capabilities to help build advanced, high-performance and cost-efficient models for powering enterprise and agentic AI applications with IBM watsonx.”
CoreWeave began offering the Nvidia GB200 NVL72 instances in February of this year, initially from the company’s US-West-01 region.
The GB200 NVL72-based instances on CoreWeave connect 36 Nvidia Grace CPUs and 72 Nvidia Blackwell GPUs in a liquid-cooled, rack-scale design. They are available as bare-metal instances through CoreWeave Kubernetes Service and can scale to clusters of up to 110,000 GPUs.
Additionally, the CoreWeave platform features Nvidia BlueField-3 DPUs, enabling multi-tenant cloud networking, accelerated data access, and elastic GPU computing.
CoreWeave organizes its footprint into a hierarchy: a Geo is a continent or country; a Super Region is a large section of a Geo containing multiple regions; and a Region is an area comprising multiple Availability Zones, each made up of one or more data halls that operate independently of one another.
According to CoreWeave’s region information, only a small portion of its regions and AZs offer “General Access,” while the rest are “Dedicated Access” – that is, reserved for a single customer.
In the US, the company’s RN02A, US-West 01A and 04A, and US-East 01A, 02A, 04A, 06A, and 08A availability zones are “General Access.” The other AZs in the US, and currently all of those in Europe, are Dedicated Access only.
In March 2025, the company announced it would deploy a cluster of GB200s at a Bulk Infrastructure data center in Norway.
CoreWeave began trading on the Nasdaq stock exchange at the end of March 2025 with shares priced at $40 each for a total potential raise of close to $1.5bn.