Enhanced support for NVIDIA DOCA Platform Framework (DPF), part of DOCA 3.0, and NVIDIA AI Enterprise brings advanced provisioning, lifecycle management, and service deployment capabilities to modern AI infrastructures.
SAN JOSE, Calif., June 10, 2025–(BUSINESS WIRE)–Spectro Cloud, a leading provider of Kubernetes management solutions, today announced the integration of NVIDIA DOCA Platform Framework (DPF), part of NVIDIA’s latest DOCA 3.0 and NVIDIA AI Enterprise software, into its Palette platform.
Building on its proven track record as a trusted partner for major organizations deploying Kubernetes in the cloud, at the data center, and at the edge, Spectro Cloud continues to expand its leadership in enabling production-ready infrastructure for AI and modern applications.
This integration empowers organizations to efficiently deploy and manage NVIDIA BlueField-3 DPUs alongside AI workloads across diverse environments, including telco, enterprise, and edge. Spectro Cloud will demonstrate and discuss the integration at GTC Paris, June 11-12.
With the integration of DPF, Palette users gain access to a suite of advanced features designed to optimize data center operations:
Comprehensive provisioning and lifecycle management: Palette streamlines the deployment and management of NVIDIA BlueField-accelerated infrastructure, ensuring seamless operations across various environments.
Enhanced security service deployment: With the integration of NVIDIA DOCA Argus, customers can elevate cybersecurity capabilities, providing real-time threat detection for AI workloads. DOCA Argus operates autonomously on NVIDIA BlueField, enabling runtime threat detection, agentless operation, and seamless integration into existing enterprise security platforms.
Support for advanced DOCA networking features: Palette now supports deployment of DOCA Flow features, including ACL pipe, LPM pipe, CT pipe, ordered list pipe, external send queue (SQ), and pipe resize, enabling more granular control over data traffic and improved network efficiency.
NVIDIA AI Enterprise-ready deployments with Palette
Palette now supports NVIDIA AI Enterprise-ready deployments, streamlining how organizations operationalize AI across their infrastructure stack. With deep integration of NVIDIA AI Enterprise software components, Palette provides a turnkey experience to provision, manage, and scale AI workloads, including:
NVIDIA GPU Operator
Automates the provisioning, health monitoring, and lifecycle management of GPU resources in Kubernetes environments, reducing the operational burden of running GPU-intensive AI/ML workloads.
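As an illustration, the NVIDIA GPU Operator is typically installed into a Kubernetes cluster from NVIDIA's public Helm repository; the commands below follow NVIDIA's standard installation procedure and are a general sketch, not Palette-specific steps (Palette automates this through its cluster profiles):

```shell
# Add NVIDIA's Helm repository (public NGC Helm repo)
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator into its own namespace; it then deploys
# GPU drivers, the device plugin, and monitoring components on GPU nodes
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace
```

Once installed, the operator labels GPU nodes and exposes `nvidia.com/gpu` as a schedulable Kubernetes resource, so AI/ML pods can request GPUs declaratively.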
NVIDIA Network Operator
Delivers accelerated network performance using DOCA infrastructure. It enables low-latency, high-throughput communication critical for distributed AI inference and training workloads.
NVIDIA NIM Microservices
Palette simplifies the deployment of NVIDIA NIM microservices, a new class of optimized, containerized inference APIs that allow organizations to instantly serve popular foundation models, including LLMs, vision models, and ASR pipelines. With Palette, users can launch NIM endpoints on GPU-accelerated infrastructure with policy-based governance, lifecycle management, and integration into CI/CD pipelines, enabling rapid experimentation and production scaling of AI applications.
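For context, NIM microservices expose an OpenAI-compatible HTTP API, so a deployed endpoint can be queried with a plain `curl` request. The sketch below assumes a hypothetical NIM endpoint reachable at `localhost:8000` serving a Llama 3 model; the host and model name will differ per deployment:

```shell
# Query a running NIM endpoint via its OpenAI-compatible
# /v1/chat/completions route (endpoint address is an assumption)
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta/llama3-8b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64
      }'
```

Because the API is OpenAI-compatible, existing client libraries and CI/CD test harnesses can target a NIM endpoint without code changes beyond the base URL.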
NVIDIA NeMo
With Palette’s industry-leading declarative management, platform teams can easily define reusable cluster configurations that include everything from NVIDIA NeMo microservices for building, customizing, evaluating, and guardrailing LLMs, to GPU drivers and NVIDIA CUDA libraries, to the NVIDIA Dynamo inference framework, plus PyTorch/TensorFlow and Helm chart deployments. This approach enables a scalable, repeatable, and operationally efficient foundation for AI workloads.