Peter Jang
February 24, 2025 15:55
NVIDIA AI Enterprise now supports the H200 NVL GPU, boosting AI infrastructure performance and efficiency. The updates include new software components for accelerated AI workloads.
NVIDIA has announced significant updates to the NVIDIA AI Enterprise platform, adding support for the NVIDIA H200 NVL GPU. The addition is part of the latest release of the company’s infrastructure software for enterprise-grade AI applications. According to NVIDIA, the newly supported H200 NVL delivers state-of-the-art capabilities for agentic and generative AI.
NVIDIA AI Enterprise Platform
The NVIDIA AI Enterprise platform is designed to streamline the development and deployment of production-grade AI solutions. It consists of a comprehensive set of software components that can be deployed across diverse hardware, including servers, edge systems, and workstations. The platform is organized into two main categories: the AI and data science software catalog and the infrastructure software collection.
The AI and data science software catalog includes NVIDIA NIM microservices and frameworks for building AI workflows. These components are containerized for smooth cloud-native deployment, ensuring compatibility with various cloud service providers.
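As a rough sketch of how these containerized components are typically consumed, the Python snippet below queries a NIM microservice through its OpenAI-compatible chat endpoint. The local URL, port, and model name are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: querying a locally running NIM microservice via its
# OpenAI-compatible API. The base URL, port, and model identifier are
# assumptions for a typical local deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",       # assumed local NIM endpoint
    api_key="not-used-for-local-deployments",  # placeholder; local NIMs may not require a key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example model that a NIM container might serve
    messages=[{"role": "user", "content": "Give a one-sentence summary of NVIDIA NIM."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```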
Infrastructure software collection
The infrastructure software collection provides the essential components for running AI and data science workloads on accelerated systems. It includes GPU drivers, networking and virtualization software, and Kubernetes operators for GPUs, along with Base Command Manager Essentials for efficient cluster management.
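To illustrate how GPUs managed by the Kubernetes GPU operator surface to workloads, the sketch below lists the allocatable nvidia.com/gpu resources on each node using the official Kubernetes Python client. It assumes a cluster with the operator already deployed and a configured kubeconfig.

```python
# Minimal sketch: listing GPUs that the NVIDIA GPU operator exposes as
# schedulable node resources (nvidia.com/gpu). Assumes the `kubernetes`
# Python client is installed and kubeconfig points at a cluster with the
# operator deployed.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    allocatable = node.status.allocatable or {}
    gpus = allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")
```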
With the latest update, the infrastructure software collection now supports the H200 NVL GPU, significantly improving the performance and energy efficiency of AI applications.
H200 NVL GPU enhancements
The H200 NVL GPU, introduced at the Supercomputing 2024 conference, is designed for data centers that require lower-power, air-cooled enterprise rack designs. It offers flexible configurations for accelerating a wide range of AI workloads. Compared with the NVIDIA H100 NVL, the GPU provides 1.5x more memory and 1.2x more bandwidth, delivering up to 1.7x faster inference performance.
Support for the H200 NVL in NVIDIA AI Enterprise is being rolled out in stages. Version 6.0 of the current infrastructure collection supports bare-metal deployments and virtualization using GPU passthrough. The upcoming version 6.1 is expected to add virtualization support with vGPU.
Reference architecture and availability
NVIDIA also introduced a reference architecture to simplify the deployment and configuration of AI systems. The architecture gives original equipment manufacturers (OEMs) and partners a flexible infrastructure stack that ensures consistent software components and adaptable hardware configurations.
For enterprises purchasing servers with the H200 NVL, NVIDIA AI Enterprise is available immediately and includes a five-year subscription. NVIDIA also offers several ways to get started, including free access to NIM microservices for testing and 90-day free evaluation licenses. NVIDIA AI Enterprise infrastructure collection 6.0 can be downloaded from the NVIDIA NGC catalog.
Image source: Shutterstock