
13 AI-Focused Storage Offerings On Display At Nvidia GTC 2025

By Advanced AI Bot | May 30, 2025 | 9 Mins Read


While some of the top storage vendors pledged support for the new Nvidia AI Data Platform reference design for AI infrastructure and AI inference, others introduced a wide range of hardware and software aimed at expanding the overall AI ecosystem.

While the Nvidia GTC 2025 event this week is centered on what Nvidia is doing to build an AI ecosystem, including the introduction of its next-generation Blackwell Ultra GPU for AI data centers, it is also an opportunity for others in the industry to showcase what they bring to that ecosystem.

Storage is one key part of building AI infrastructure. The GPUs that process AI training and inference are data-hungry, requiring not only ever more capacity to store that data but also performance to match.

Therefore, it is no surprise that storage has become front and center at GTC 2025.

[Related: DDN CEO On The Company’s AI Mission And The ‘Essential Role’ Partners Play]

Nvidia itself made storage news at the conference with the Tuesday introduction of the Nvidia AI Data Platform, a customizable reference design that many storage developers are using to build what Nvidia calls a “new class” of AI infrastructure for AI inference workloads.

Leading storage vendors including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, Vast Data, and Weka are working with Nvidia to build infrastructures that speed AI workloads with specialized AI query agents, helping businesses generate insights from data in near real time using Nvidia Blackwell GPUs, Nvidia BlueField DPUs, and other technologies.

Other storage vendors are also at the conference showing a wide range of hardware systems and components as well as software aimed at helping businesses take advantage of AI.

CRN here presents information on the latest storage offerings from 13 vendors looking to help build high-capacity and/or high-performance systems aimed at making sure the data pipeline to AI moves as quickly as needed.

Also look for CRN to examine how vendors outside the storage industry are helping build other parts of the AI infrastructure.

Graid Technology SupremeRAID AE (AI Edition)

Santa Clara, Calif.-based Graid used Nvidia GTC 2025 to introduce its SupremeRAID AE (AI Edition), which was designed to help enterprises running GPU servers and AI workloads deliver optimized data management with GPUDirect Storage (GDS) and an Intelligent Data Offload Engine. By enabling direct NVMe-to-GPU transfers, it helps eliminate bottlenecks and reduces latency for faster model training and inference. Its offload engine optimizes GPU utilization by handling data tasks, freeing GPU resources for AI processing. SupremeRAID AE supports NVMe-oF for scalable AI storage, and provides enterprise-grade RAID protection to help ensure uninterrupted access to critical datasets. Its flexible GPU deployment model allows enterprises to scale workloads while maximizing AI infrastructure performance.
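
For readers who want a concrete picture of the direct NVMe-to-GPU path that GPUDirect Storage enables, here is a minimal Python sketch using NVIDIA's RAPIDS kvikio bindings to the cuFile API. It is a generic GDS illustration, not Graid's SupremeRAID AE software, and the file path is a placeholder on a GDS-capable filesystem.

import cupy
import kvikio

# 1 MiB destination buffer allocated in GPU memory.
buf = cupy.empty(1 << 20, dtype=cupy.uint8)

# kvikio.CuFile wraps NVIDIA's cuFile API; read() moves data from NVMe
# into the GPU buffer, bypassing a CPU bounce buffer when GDS is available
# (kvikio falls back to a POSIX path when it is not).
f = kvikio.CuFile("/mnt/nvme/shard-000.bin", "r")   # placeholder path
nbytes = f.read(buf)
f.close()

print(f"read {nbytes} bytes directly into GPU memory")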

Kioxia 122.88TB NVMe SSD

San Jose, Calif.-based Kioxia's new enterprise-class LC9 SSD is aimed at GenAI applications. The SSD is optimized for capacity and includes a PCIe 5.0 interface and dual-port capability for fault tolerance or connectivity to multiple compute systems. It can help accelerate AI model training, inference, and retrieval-augmented generation (RAG) at scale. Kioxia claims the LC9 drives are the first enterprise QLC SSDs to use eighth-generation 2Tb QLC BiCS FLASH 3D flash memory.

Pure Storage FlashBlade//EXA

Pure Storage FlashBlade//EXA is a new data storage platform aimed at meeting the requirements of AI and high-performance computing (HPC). The new platform, from Santa Clara, Calif.-based Pure Storage, helps provide multidimensional performance with massively parallel processing and scalable metadata IOPS to support high-speed AI requirements, with performance of 10-plus terabytes per second in a single namespace. The platform also helps eliminate metadata bottlenecks with high metadata performance, availability, and resiliency for massive AI datasets, with no manual tuning or additional configuration needed. Its configurable and disaggregated architecture uses industry-standard protocols along with Nvidia ConnectX NICs, Spectrum switches, LinkX cables, and accelerated communications libraries. FlashBlade//EXA is slated to be available in summer 2025.

Cohesity Gaia On-Premises

Cohesity is expanding Cohesity Gaia, the enterprise knowledge discovery assistant, to deliver what the Santa Clara, Calif.-based company calls the industry’s first AI search capabilities for data stored on-premises. The new Cohesity Gaia will be available for use with Cisco UCS, HPE, and Nutanix platforms. The expansion will enable enterprises to uncover AI-powered insights by unlocking the full potential of their on-premises backup data. For enterprises adopting hybrid cloud strategies and looking to keep their valuable, critical data on-premises to meet security, compliance, and performance requirements, the expanded Cohesity Gaia can help them access more high-quality data and remain in control of their infrastructure. Availability is slated for mid-2025.

IBM Content Aware Storage In Fusion

IBM’s new content-aware storage (CAS) capability helps enterprises extract the meaning hidden in their rapidly growing volumes of unstructured data for inferencing, without compromising trust and safety, to responsibly scale and enhance AI applications like retrieval-augmented generation (RAG) and AI reasoning. IBM Storage Scale responds to queries using the extracted and augmented data, with Nvidia BlueField-3 DPUs and Nvidia Spectrum-X networking helping speed communications between GPUs and storage. The multimodal document data extraction workflow also leverages Nvidia NeMo Retriever microservices, built with Nvidia NIM. Availability is slated for the second quarter of 2025.
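
To make the RAG flow described above concrete, here is a generic, library-agnostic Python sketch of the retrieval step: extracted document chunks are embedded, the closest chunks to a query are selected, and the result augments the prompt. The embed() function is a toy stand-in for a real embedding service such as the NeMo Retriever microservices mentioned above; this is not IBM Storage Scale or NeMo code.

import numpy as np

def embed(texts, dim=64):
    # Toy, deterministic bag-of-words hashing embedding used only so this
    # sketch runs end to end; swap in a real embedding service in practice.
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for token in text.lower().split():
            vecs[i, hash(token) % dim] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.maximum(norms, 1e-9)

# Chunks produced by a document-extraction step (placeholder text).
chunks = [
    "Q3 revenue grew 12 percent year over year.",
    "The data center segment drove most of the growth.",
    "Operating expenses were flat quarter over quarter.",
]
chunk_vecs = embed(chunks)                      # shape: (num_chunks, dim)

query = "How did revenue change in Q3?"
q_vec = embed([query])[0]                       # shape: (dim,)

# Cosine similarity reduces to a dot product for unit-length vectors.
scores = chunk_vecs @ q_vec
top_k = np.argsort(scores)[::-1][:2]

augmented_prompt = (
    "Context:\n" + "\n".join(chunks[i] for i in top_k)
    + f"\n\nQuestion: {query}"
)
print(augmented_prompt)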

DDN Infinia Data Ocean

DDN, Chatsworth, Calif., used Nvidia GTC 2025 to introduce its Infinia Data Ocean, a platform aimed at unifying data across edge, core, and cloud environments. The platform aims to simplify AI data management by reducing silos and improving data mobility. It provides tools for real-time data processing, multi-tenant security, and seamless AI application integration. Key features include automated AI data pipelines, low-latency architecture, and cloud-native scalability. The platform is designed to support enterprises managing complex AI workloads, offering security, efficiency, and fault-tolerant data storage. Infinia Data Ocean targets the increasing demands of AI infrastructure across various industries.

Hitachi iQ M Series

Santa Clara, Calif.-based Hitachi Vantara introduced the Hitachi iQ M Series, the latest addition to the Hitachi iQ portfolio of AI-ready infrastructure solutions. The new offering lowers the entry cost for AI, with built-in adaptability and scalability as customer needs evolve. Integrating accelerated computing platforms with robust networking, the Hitachi iQ M Series combines Hitachi Vantara Virtual Storage Platform One (VSP One) storage, integrated file system choices, and optional Nvidia AI Enterprise software into a scalable and adaptable AI infrastructure package. The series offers a choice of Nvidia accelerated computing platforms so customers can select the most suitable GPU for specific workloads, and it allows compute and storage to scale independently, with the flexibility to adapt to diverse and fluctuating data sizes, data types, and workloads.

Pliops/vLLM Production Stack Collaboration

Santa Clara, Calif.-based Pliops and the open source vLLM Production Stack project have collaborated on technology to deliver high performance and efficiency for LLM inference. Pliops brings expertise in shared storage and efficient vLLM cache offloading, while LMCache Lab brings its scalability framework for multiple instance execution. The combined offering leverages Pliops’ KV (key-value) storage backend to increase performance and scalability in AI applications. Pliops’ XDP LightningAI, an accelerated KV distributed smart node, introduces a new petabyte tier of memory below HBM for GPU compute applications, and connects to GPU servers by leveraging the NVMe-oF storage ecosystem to provide a distributed KV service.
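
The idea of a key-value tier below HBM can be illustrated with a short conceptual sketch: per-request, per-layer key/value tensors from LLM inference are evicted from GPU memory into a shared key-value store and fetched back when the request resumes. The RemoteKVStore class, keys, and tensor shapes below are hypothetical stand-ins, not Pliops' or vLLM's actual APIs.

import torch

class RemoteKVStore:
    """Dict-backed stand-in for a shared KV tier; a real deployment would
    talk to a remote service over NVMe-oF or the network instead."""

    def __init__(self):
        self._store = {}

    def put(self, key, kv):
        # Move the (keys, values) tensors off the GPU before storing them.
        self._store[key] = tuple(t.detach().to("cpu") for t in kv)

    def get(self, key, device):
        # Bring the tensors back onto the GPU when the request resumes.
        return tuple(t.to(device) for t in self._store[key])

store = RemoteKVStore()
device = "cuda" if torch.cuda.is_available() else "cpu"

# Fake per-layer KV tensors for one request: (num_heads, seq_len, head_dim).
kv = (torch.randn(8, 512, 64, device=device),
      torch.randn(8, 512, 64, device=device))

store.put(("request-42", "layer-0"), kv)                     # evict from HBM
kv_restored = store.get(("request-42", "layer-0"), device)   # fetch on resume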

MinIO AIStor + NVIDIA

To further support the growing demands of modern AI workloads, Redwood City, Calif.-based MinIO is adding advancements to its AIStor AWS S3-compatible object store technology that deepen its support for the Nvidia AI ecosystem. The new integrations include support for Nvidia GPUDirect Storage (GDS) for object storage, which delivers a significant increase in CPU efficiency on the Nvidia GPU server; native integration with the Nvidia BlueField-3 networking platform to help drive down object storage total cost of ownership; and incorporation of Nvidia NIM microservices into MinIO’s AIStor promptObject inference to simplify deployment and management of inference infrastructure and deliver faster inference via model optimizations for Nvidia hardware.
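
Because AIStor is S3-compatible, any standard S3 client can read and write objects in it. The short boto3 sketch below shows that baseline path with placeholder endpoint, credentials, bucket, and key names; it exercises only the generic S3 API and none of the GDS, BlueField-3, or NIM integrations described above.

import boto3

# Placeholder endpoint and credentials for an S3-compatible object store.
s3 = boto3.client(
    "s3",
    endpoint_url="http://aistor.example.internal:9000",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Write and read back a (placeholder) training shard over the S3 API.
s3.put_object(Bucket="training-data", Key="shards/shard-000.tar",
              Body=b"...serialized training shard...")

obj = s3.get_object(Bucket="training-data", Key="shards/shard-000.tar")
payload = obj["Body"].read()
print(f"fetched {len(payload)} bytes")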

NetApp AFF A90 Validated for NVIDIA DGX SuperPOD

The new NetApp AFF A90 enterprise storage systems with NetApp ONTAP from San Jose, Calif.-based NetApp are now validated for Nvidia DGX SuperPOD. By bringing what it terms the most secure storage on the planet to DGX SuperPOD deployments, NetApp gives customers the enterprise data management capabilities and scalable multi-tenancy they need to develop and operate high-performance AI factories and deploy agentic AI while eliminating data silos.

Vast InsightEngine with Nvidia DGX

New York-based Vast Data used Nvidia GTC 2025 to introduce the Vast InsightEngine with Nvidia DGX, a new converged system that gives enterprise customers a real-time AI data processing and retrieval platform designed to simplify AI deployments while delivering fast, scalable, and secure data services. Vast InsightEngine with Nvidia DGX converges instant automated data ingestion, exabyte-scale vector search, event-driven orchestration, and GPU-optimized inferencing into a single system with unified, global, enterprise-grade security to help businesses efficiently accelerate AI adoption and time-to-value by unlocking AI-driven insights. It is a pre-configured, fully integrated real-time AI stack built to ensure seamless data flow and scalable AI inferencing at exabyte scale.

Solidigm Liquid-Cooled Enterprise SSDs

Rancho Cordova, Calif.-based SSD manufacturer Solidigm unveiled one of the world’s first liquid-cooled enterprise SSDs (eSSDs) at Nvidia GTC 2025, designed to eliminate fans and enable fully liquid-cooled AI servers. Traditional SSD cooling has limited server design flexibility, but Solidigm’s cold-plate-cooled eSSDs, developed with Nvidia, overcome challenges such as hot-swappability and single-side cooling constraints. The technology features the D7-PS1010 in an E1.S 9.5mm form factor, which improves thermal efficiency and data center serviceability. The offering will be available in the second half of 2025. The company said a 15mm air-cooled version is also launching for broader server compatibility.

Vdura V11 Data Platform and V5000 All-Flash Appliance

The V5000 all-flash appliance from Milpitas, Calif.-based Vdura was engineered to address organizations’ increasing demands as AI pipelines and generative AI models move into production. The all-flash appliance is built on Vdura’s V11 data platform, and delivers GPU-saturating throughput while ensuring the durability and availability of data for 24x7x365 operating conditions. The update is aimed at helping organizations be confident that their AI infrastructures are scalable and reliable.


