Advanced AI News
NVIDIA AI

Hewlett Packard Enterprise Deepens Integration with NVIDIA on AI Factory Portfolio

By Advanced AI Bot | May 19, 2025 | 8 min read


HPE Private Cloud AI, co-developed with NVIDIA, will support feature branch model updates from NVIDIA AI Enterprise and the NVIDIA Enterprise AI Factory validated design.

HPE Alletra Storage MP X10000 offers an SDK for NVIDIA AI Data Platform to streamline unstructured data pipelines for ingestion, inferencing, training and continuous learning.

HPE AI servers rank No. 1 in over 50 industry benchmarks, and HPE ProLiant Compute DL380a Gen12 will be available to order with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs starting June 4.

HPE OpsRamp Software expands accelerated compute optimization tools to support NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

HOUSTON, May 19, 2025–(BUSINESS WIRE)–Hewlett Packard Enterprise (NYSE: HPE) announced enhancements to the portfolio of NVIDIA AI Computing by HPE solutions that support the entire AI lifecycle and meet the unique needs of enterprises, service providers, sovereigns, and research and discovery organizations. These updates deepen integrations with NVIDIA AI Enterprise, expanding support for HPE Private Cloud AI with accelerated compute and launching the HPE Alletra Storage MP X10000 software development kit (SDK) for the NVIDIA AI Data Platform. HPE is also releasing compute and software offerings with the NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU and the NVIDIA Enterprise AI Factory validated design.

“Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers,” said Antonio Neri, president and CEO of HPE. “By co-engineering cutting-edge AI technologies elevated by HPE’s robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organization, no matter where they are on their AI journey. Together, we are meeting the demands of today, while paving the way for an AI-driven future.”

“Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI,” said Jensen Huang, founder and CEO of NVIDIA. “Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge.”

HPE Private Cloud AI adds feature branch support for NVIDIA AI Enterprise

HPE Private Cloud AI, a turnkey, cloud-based AI factory co-developed with NVIDIA, includes a dedicated developer solution that helps customers proliferate unified AI strategies across the business, enabling more profitable workloads and significantly reducing risk. To further aid AI developers, HPE Private Cloud AI will support feature branch model updates from NVIDIA AI Enterprise, which include AI frameworks, NVIDIA NIM microservices for pre-trained models, and SDKs. Feature branch model support will allow developers to test and validate software features and optimizations for AI workloads. In combination with existing support of production branch models that feature built-in guardrails, HPE Private Cloud AI will enable businesses of every size to build developer systems and scale to production-ready agentic and generative AI (GenAI) applications while adopting a safe, multi-layered approach across the enterprise.


HPE Private Cloud AI, a full-stack solution for agentic and GenAI workloads, will support the NVIDIA Enterprise AI Factory validated design.

HPE’s newest storage solution supports NVIDIA AI Data Platform

HPE Alletra Storage MP X10000 will introduce an SDK which works with the NVIDIA AI Data Platform reference design. Connecting HPE’s newest data platform with NVIDIA’s customizable reference design will offer customers accelerated performance and intelligent pipeline orchestration to enable agentic AI. A part of HPE’s growing data intelligence strategy, the new X10000 SDK enables the integration of context-rich, AI-ready data directly into the NVIDIA AI ecosystem. This empowers enterprises to streamline unstructured data pipelines for ingestion, inference, training, and continuous learning across NVIDIA-accelerated infrastructure. Primary benefits of the SDK integration include:

Unlocking data value through flexible inline data processing, vector indexing, metadata enrichment, and data management.

Driving efficiency with remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000 to accelerate the data path with the NVIDIA AI Data Platform.

Right-sizing deployments with modular, composable building blocks of the X10000, enabling customers to scale capacity and performance independently to align with workload requirements.

Customers will be able to use raw enterprise data to inform agentic AI applications and tools by seamlessly unifying storage and intelligence layers through RDMA transfers. HPE and NVIDIA are working together to enable a new era of real-time, intelligent data access for customers from the edge to the core to the cloud.
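To make the pipeline idea above concrete, here is a minimal, self-contained Python sketch of a toy ingestion step that enriches documents with metadata and indexes a mock embedding for similarity search. All names (`VectorIndex`, `mock_embed`) and the interfaces are illustrative assumptions; the actual X10000 SDK and NVIDIA AI Data Platform APIs are not described in this announcement.

```python
import math

def mock_embed(text, dim=8):
    """Deterministic stand-in for a real embedding model: hashes
    characters into a fixed-size vector and L2-normalizes it."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class VectorIndex:
    """Toy index holding (doc_id, metadata, vector) triples."""

    def __init__(self):
        self.entries = []

    def ingest(self, doc_id, text, source):
        # Metadata enrichment: attach provenance and simple stats
        # alongside the embedding at ingestion time.
        meta = {"source": source, "length": len(text)}
        self.entries.append((doc_id, meta, mock_embed(text)))

    def query(self, text, k=1):
        # Rank stored documents by cosine similarity (vectors are
        # already unit-length, so the dot product suffices).
        q = mock_embed(text)
        scored = sorted(
            self.entries,
            key=lambda e: -sum(a * b for a, b in zip(q, e[2])),
        )
        return [doc_id for doc_id, _, _ in scored[:k]]
```

In a real deployment the embedding call would be a GPU-accelerated model and the index a purpose-built vector store; the sketch only shows where metadata enrichment and vector indexing sit in the ingestion path.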

Additional updates about this integration will be announced at HPE Discover Las Vegas 2025.

Industry-leading AI server levels up with NVIDIA RTX PRO 6000 Blackwell support

HPE ProLiant Compute DL380a Gen12 servers featuring NVIDIA H100 NVL, H200 NVL and L40S GPUs topped the latest round of MLPerf Inference: Datacenter v5.0 benchmarks in 10 tests, including GPT-J, Llama2-70B, ResNet50 and RetinaNet. This industry-leading AI server will soon be available with up to 10 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, which will provide enhanced capabilities and deliver exceptional performance for enterprise AI workloads, including agentic multimodal AI inference, physical AI, model fine-tuning, as well as design, graphics and video applications. Key features include:

Advanced cooling options: HPE ProLiant Compute DL380a Gen12 is available in both air-cooled and direct liquid-cooled (DLC) options, supported by HPE’s industry-leading liquid cooling expertise1, to maintain optimal performance under heavy workloads.

Enhanced security: HPE Integrated Lights Out (iLO) 7, embedded in the HPE ProLiant Compute Gen12 portfolio, features built-in safeguards based on Silicon Root of Trust, making these the first servers with post-quantum cryptography readiness that meet the requirements for FIPS 140-3 Level 3 certification, a high-level cryptographic security standard.

Operations management: HPE Compute Ops Management provides secure and automated lifecycle management for server environments featuring proactive alerts and predictive AI-driven insights that inform increased energy efficiency and global system health.

Two additional servers topped MLPerf Inference v5.0 benchmarks, providing third-party validation of HPE’s strong leadership in AI innovation, showcasing the superior capabilities of the HPE AI Factory. Together with the HPE ProLiant Compute DL380a Gen12, these systems lead in more than 50 scenarios. Highlights include:

HPE ProLiant Compute DL384 Gen12 server, featuring the dual-socket NVIDIA GH200 NVL2, ranked first in four tests including Llama2-70B and Mixtral-8x7B.

HPE Cray XD670 server, with 8 NVIDIA H200 SXM GPUs, achieved the top ranking in 30 different scenarios, including large language models (LLMs) and computer vision tasks.

Advancing AI infrastructure with new accelerated compute optimization

HPE OpsRamp Software is expanding its AI infrastructure optimization solutions to support the upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs for AI workloads. This software-as-a-service (SaaS) solution from HPE will help enterprise IT teams streamline operations as they deploy, monitor and optimize distributed AI infrastructure across hybrid environments. HPE OpsRamp enables full-stack AI workload-to-infrastructure observability, workflow automation, as well as AI-powered analytics and event management. Deep integration with NVIDIA infrastructure – including NVIDIA accelerated computing, NVIDIA BlueField, NVIDIA Quantum InfiniBand and Spectrum-X Ethernet networking and NVIDIA Base Command Manager – provides granular metrics to monitor the performance and resilience of AI infrastructure.

HPE OpsRamp gives IT teams the ability to:

Observe overall health and performance of AI infrastructure by monitoring GPU temperature, utilization, memory usage, power consumption, clock speeds and fan speeds.

Optimize job scheduling and resources by tracking GPU and CPU utilization across the clusters.

Automate responses to certain events, for example, reducing clock speed or powering down a GPU to prevent damage.

Predict future resource needs and optimize resource allocation by analyzing historical performance and utilization data.

Monitor power consumption and resource utilization in order to optimize costs for large AI deployments.
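As a rough illustration of the kind of policy such automated responses might encode, the sketch below maps a GPU telemetry sample to an action ("reduce clock speed or power down a GPU to prevent damage"). This is a hypothetical example with made-up field names and thresholds, not OpsRamp's actual API or data model.

```python
from dataclasses import dataclass

@dataclass
class GpuSample:
    """Hypothetical telemetry sample; fields mirror the metrics
    listed above (temperature, utilization, power draw)."""
    gpu_id: int
    temperature_c: float    # core temperature in Celsius
    utilization_pct: float  # compute utilization, 0-100
    power_w: float          # instantaneous power draw in watts

def plan_action(sample, temp_throttle_c=85.0, temp_shutdown_c=95.0):
    """Map one sample to an automated response. Thresholds are
    illustrative defaults, not vendor-specified limits."""
    if sample.temperature_c >= temp_shutdown_c:
        return "power_down"    # protect the hardware from damage
    if sample.temperature_c >= temp_throttle_c:
        return "reduce_clock"  # throttle to shed heat
    return "ok"
```

For example, `plan_action(GpuSample(0, 97.5, 35.0, 150.0))` returns `"power_down"`, while a sample at 72 °C returns `"ok"`. A production system would additionally debounce readings over time and raise events rather than act on a single sample.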

Availability

HPE Private Cloud AI will add feature branch support for NVIDIA AI Enterprise by Summer 2025.

HPE Alletra Storage MP X10000 SDK and direct memory access to NVIDIA accelerated computing infrastructure will be available starting Summer 2025.

HPE ProLiant Compute DL380a Gen12 with NVIDIA RTX PRO 6000 Server Edition will be available to order starting June 4, 2025.

HPE OpsRamp Software support for the NVIDIA RTX PRO 6000 Server Edition will be available at launch.


About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is a global technology leader focused on developing intelligent solutions that allow customers to capture, analyze, and act upon data seamlessly. The company innovates across networking, hybrid cloud, and AI to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com.

____________________

1HPE has built and delivered the world’s fastest direct-liquid cooled supercomputers per the November 2024 TOP500 list.

 

View source version on businesswire.com: https://www.businesswire.com/news/home/20250518110768/en/

Contacts

Media Contact:
Cristina Thai
cristina.thai@hpe.com


