HPE Private Cloud AI, co-developed with NVIDIA, will support feature branch model updates from NVIDIA AI Enterprise and the NVIDIA Enterprise AI Factory validated design.
HPE Alletra Storage MP X10000 offers an SDK for NVIDIA AI Data Platform to streamline unstructured data pipelines for ingestion, inferencing, training and continuous learning.
HPE AI servers rank No. 1 in over 50 industry benchmarks, and HPE ProLiant Compute DL380a Gen12 will be available to order with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs starting June 4.
HPE OpsRamp Software expands accelerated compute optimization tools to support NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
HOUSTON, May 19, 2025–(BUSINESS WIRE)–Hewlett Packard Enterprise (NYSE: HPE) announced enhancements to the portfolio of NVIDIA AI Computing by HPE solutions that support the entire AI lifecycle and meet the unique needs of enterprises, service providers, sovereigns and research & discovery organizations. These updates deepen integration with NVIDIA AI Enterprise by expanding support for HPE Private Cloud AI with accelerated compute and launching the HPE Alletra Storage MP X10000 software development kit (SDK) for NVIDIA AI Data Platform. HPE is also releasing compute and software offerings with the NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU and the NVIDIA Enterprise AI Factory validated design.
“Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers,” said Antonio Neri, president and CEO of HPE. “By co-engineering cutting-edge AI technologies elevated by HPE’s robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organization, no matter where they are on their AI journey. Together, we are meeting the demands of today, while paving the way for an AI-driven future.”
“Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI,” said Jensen Huang, founder and CEO of NVIDIA. “Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge.”
HPE Private Cloud AI adds feature branch support for NVIDIA AI Enterprise
HPE Private Cloud AI, a turnkey, cloud-based AI factory co-developed with NVIDIA, includes a dedicated developer solution that helps customers proliferate unified AI strategies across the business, enabling more profitable workloads and significantly reducing risk. To further aid AI developers, HPE Private Cloud AI will support feature branch model updates from NVIDIA AI Enterprise, which include AI frameworks, NVIDIA NIM microservices for pre-trained models, and SDKs. Feature branch model support will allow developers to test and validate software features and optimizations for AI workloads. In combination with existing support of production branch models that feature built-in guardrails, HPE Private Cloud AI will enable businesses of every size to build developer systems and scale to production-ready agentic and generative AI (GenAI) applications while adopting a safe, multi-layered approach across the enterprise.
HPE Private Cloud AI, a full-stack solution for agentic and GenAI workloads, will support the NVIDIA Enterprise AI Factory validated design.
HPE’s newest storage solution supports NVIDIA AI Data Platform
HPE Alletra Storage MP X10000 will introduce an SDK that works with the NVIDIA AI Data Platform reference design. Connecting HPE's newest data platform with NVIDIA's customizable reference design will offer customers accelerated performance and intelligent pipeline orchestration to enable agentic AI. A part of HPE's growing data intelligence strategy, the new X10000 SDK enables the integration of context-rich, AI-ready data directly into the NVIDIA AI ecosystem. This empowers enterprises to streamline unstructured data pipelines for ingestion, inference, training, and continuous learning across NVIDIA-accelerated infrastructure. Primary benefits of the SDK integration include:
Unlocking data value through flexible inline data processing, vector indexing, metadata enrichment, and data management.
Driving efficiency with remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000 to accelerate the data path with the NVIDIA AI Data Platform.
Right-sizing deployments with modular, composable building blocks of the X10000, enabling customers to scale capacity and performance independently to align with workload requirements.
Customers will be able to use raw enterprise data to inform agentic AI applications and tools by seamlessly unifying storage and intelligence layers through RDMA transfers. HPE and NVIDIA are working together to enable a new era of real-time, intelligent data access for customers from the edge to the core to the cloud.
Additional updates about this integration will be announced at HPE Discover Las Vegas 2025.
Industry-leading AI server levels up with NVIDIA RTX PRO 6000 Blackwell support
HPE ProLiant Compute DL380a Gen12 servers featuring NVIDIA H100 NVL, H200 NVL and L40S GPUs topped the latest round of MLPerf Inference: Datacenter v5.0 benchmarks in 10 tests, including GPT-J, Llama2-70B, ResNet50 and RetinaNet. This industry-leading AI server will soon be available with up to 10 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, which will provide enhanced capabilities and deliver exceptional performance for enterprise AI workloads, including agentic multimodal AI inference, physical AI, model fine tuning, as well as design, graphics and video applications. Key features include:
Advanced cooling options: HPE ProLiant Compute DL380a Gen12 is available in both air-cooled and direct liquid-cooled (DLC) options, supported by HPE’s industry-leading liquid cooling expertise1, to maintain optimal performance under heavy workloads.
Enhanced security: HPE Integrated Lights Out (iLO) 7, embedded in the HPE ProLiant Compute Gen12 portfolio, features built-in safeguards based on Silicon Root of Trust, making these the first servers with post-quantum cryptography readiness that meet the requirements for FIPS 140-3 Level 3 certification, a high-level cryptographic security standard.
Operations management: HPE Compute Ops Management provides secure and automated lifecycle management for server environments featuring proactive alerts and predictive AI-driven insights that inform increased energy efficiency and global system health.
Two additional servers topped MLPerf Inference v5.0 benchmarks, providing third-party validation of HPE’s strong leadership in AI innovation, showcasing the superior capabilities of the HPE AI Factory. Together with the HPE ProLiant Compute DL380a Gen12, these systems lead in more than 50 scenarios. Highlights include:
HPE ProLiant Compute DL384 Gen12 server, featuring the dual-socket NVIDIA GH200 NVL2, ranked first in four tests including Llama2-70B and Mixtral-8x7B.
HPE Cray XD670 server, with 8 NVIDIA H200 SXM GPUs, achieved the top ranking in 30 different scenarios, including large language models (LLMs) and computer vision tasks.
Advancing AI infrastructure with new accelerated compute optimization
HPE OpsRamp Software is expanding its AI infrastructure optimization solutions to support the upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs for AI workloads. This software-as-a-service (SaaS) solution from HPE will help enterprise IT teams streamline operations as they deploy, monitor and optimize distributed AI infrastructure across hybrid environments. HPE OpsRamp enables full-stack AI workload-to-infrastructure observability, workflow automation, as well as AI-powered analytics and event management. Deep integration with NVIDIA infrastructure, including NVIDIA accelerated computing, NVIDIA BlueField, NVIDIA Quantum InfiniBand and Spectrum-X Ethernet networking, and NVIDIA Base Command Manager, provides granular metrics to monitor the performance and resilience of AI infrastructure.
HPE OpsRamp gives IT teams the ability to:
Observe overall health and performance of AI infrastructure by monitoring GPU temperature, utilization, memory usage, power consumption, clock speeds and fan speeds.
Optimize job scheduling and resources by tracking GPU and CPU utilization across the clusters.
Automate responses to certain events, for example, reducing clock speed or powering down a GPU to prevent damage.
Predict future resource needs and optimize resource allocation by analyzing historical performance and utilization data.
Monitor power consumption and resource utilization in order to optimize costs for large AI deployments.
Availability
HPE Private Cloud AI will add feature branch support for NVIDIA AI Enterprise by summer 2025.
HPE Alletra Storage MP X10000 SDK and direct memory access to NVIDIA accelerated computing infrastructure will be available starting Summer 2025.
HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX PRO 6000 Server Edition will be available to order starting June 4, 2025.
HPE OpsRamp Software will deliver time-to-market support for NVIDIA RTX PRO 6000 Server Edition GPUs.
About Hewlett Packard Enterprise
Hewlett Packard Enterprise (NYSE: HPE) is a global technology leader focused on developing intelligent solutions that allow customers to capture, analyze, and act upon data seamlessly. The company innovates across networking, hybrid cloud, and AI to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com.
____________________
1HPE has built and delivered the world’s fastest direct-liquid cooled supercomputers per the November 2024 TOP500 list.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250518110768/en/
Contacts
Media Contact: Cristina Thai cristina.thai@hpe.com