The NVIDIA Collective Communications Library (NCCL) has released its latest version, NCCL 2.22, which delivers significant improvements aimed at optimizing memory usage, accelerating initialization times, and introducing a cost estimation API. According to the NVIDIA technical blog, these updates are essential for high-performance computing (HPC) and artificial intelligence (AI) applications.
Release Highlights
NVIDIA Magnum IO NCCL is designed to optimize inter-GPU and multi-node communication essential for efficient parallel computing. Key features of the NCCL 2.22 release include:
- Delayed connection setup: Significantly reduces GPU memory overhead by delaying connection creation until it is needed.
- New API for cost estimation: New APIs help optimize the overlap of computation and communication, or investigate the NCCL cost model.
- ncclCommInitRank optimizations: Duplicate topology queries are eliminated, resulting in up to 90% faster initialization for applications that create multiple communicators.
- Multi-subnet support using IB routers: Added communication support for jobs spanning multiple InfiniBand subnets, enabling large-scale DL training jobs.
Detailed features
Lazy connection setup
NCCL 2.22 introduces lazy connection setup, which significantly reduces GPU memory usage by delaying connection creation until it is actually needed. This feature is especially useful for applications with a narrow usage pattern, such as repeatedly running the same algorithm. It is enabled by default, but can be disabled by setting NCCL_RUNTIME_CONNECT=0.
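As a minimal sketch of how this toggle can be applied from application code (the NCCL_RUNTIME_CONNECT variable is from the release; setting it via setenv before NCCL initialization is one option, and exporting it in the launch shell works equally well):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Disable NCCL 2.22's lazy connection setup so that connections
    // are established eagerly, e.g. to measure worst-case memory use.
    // This must happen before the first NCCL call reads the environment.
    setenv("NCCL_RUNTIME_CONNECT", "0", 1);

    // ... normal NCCL/CUDA initialization (ncclCommInitRank, etc.) follows ...
    printf("NCCL_RUNTIME_CONNECT=%s\n", getenv("NCCL_RUNTIME_CONNECT"));
    return 0;
}
```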
New Cost Model API
The new ncclGroupSimulateEnd API helps developers estimate the time an operation will take, which helps them optimize the overlap of computation and communication. Although the estimates may not perfectly match reality, they provide useful guidance for performance tuning.
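A minimal sketch of using the new API (ncclSimInfo_t and NCCL_SIM_INFO_INITIALIZER ship with NCCL 2.22; the buffer, communicator, and stream setup is assumed to have been done by the usual NCCL/CUDA boilerplate):

```c
#include <nccl.h>
#include <stdio.h>

// Estimate the completion time of a group of collectives without
// actually launching them: ncclGroupSimulateEnd() replaces
// ncclGroupEnd() and fills simInfo with the cost-model estimate.
void estimate_allreduce_time(const float* sendbuf, float* recvbuf,
                             size_t count, ncclComm_t comm,
                             cudaStream_t stream) {
    ncclSimInfo_t simInfo = NCCL_SIM_INFO_INITIALIZER;

    ncclGroupStart();
    ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, ncclSum, comm, stream);
    ncclGroupSimulateEnd(&simInfo);  // simulate instead of launching

    // estimatedTime holds NCCL's cost-model estimate for the group.
    printf("Estimated completion time: %f\n", simInfo.estimatedTime);
}
```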
Initialization optimization
To minimize initialization overhead, the NCCL team introduced several optimizations, including lazy connection setup and intra-node topology fusion. With these improvements, ncclCommInitRank runs significantly faster for applications that create multiple communicators, with execution times reduced by up to 90%.
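For context, communicator creation follows the standard pattern below (a hedged sketch; distributing the unique ID via MPI is one common choice assumed here, not something NCCL mandates):

```c
#include <nccl.h>
#include <mpi.h>

// Create an NCCL communicator per process, broadcasting the unique ID
// with MPI. Applications that call ncclCommInitRank repeatedly to build
// multiple communicators benefit most from the 2.22 initialization
// speedups, since duplicate topology queries are now eliminated.
ncclComm_t create_comm(MPI_Comm mpiComm) {
    int rank, nranks;
    MPI_Comm_rank(mpiComm, &rank);
    MPI_Comm_size(mpiComm, &nranks);

    ncclUniqueId id;
    if (rank == 0) ncclGetUniqueId(&id);            // rank 0 creates the ID
    MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, mpiComm);

    ncclComm_t comm;
    ncclCommInitRank(&comm, nranks, id, rank);
    return comm;
}
```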
New tuner plugin interface
The new tuner plugin interface (v3) provides a per-collective 2D cost table reporting the estimated time required for an operation. This allows external tuners to optimize the combination of algorithms and protocols for better performance.
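To illustrate the idea only (this is a schematic, not the actual v3 plugin typedefs, which are defined in NCCL's ext-tuner headers; the dimensions and function name here are hypothetical): an external tuner receives a table of estimated times indexed by algorithm and protocol, and can bias the entry for the combination it wants NCCL to select.

```c
// Hypothetical illustration of a v3-style tuner hook. NCCL hands the
// plugin a 2D cost table (algorithms x protocols) of estimated times;
// whichever entry is cheapest is the combination NCCL will prefer.
// NUM_ALGOS, NUM_PROTOS, and this signature are illustrative only.
#define NUM_ALGOS  6
#define NUM_PROTOS 3

void tune_collective(float costTable[NUM_ALGOS][NUM_PROTOS],
                     int preferredAlgo, int preferredProto) {
    // Making this entry the cheapest steers NCCL toward the chosen
    // algorithm/protocol combination for this collective.
    costTable[preferredAlgo][preferredProto] = 0.0f;
}
```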
Static plugin linking
For convenience and to avoid loading problems, NCCL 2.22 supports static linking of network or tuner plugins. Applications can specify this by setting NCCL_NET_PLUGIN or NCCL_TUNER_PLUGIN to STATIC_PLUGIN.
Group semantics for interruption or destruction
NCCL 2.22 introduces group semantics for ncclCommDestroy and ncclCommAbort, allowing multiple communicators to be destroyed or aborted simultaneously. This feature aims to prevent deadlocks and improve the user experience.
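A minimal sketch of the new group semantics (assuming comms is an array of communicators created earlier):

```c
#include <nccl.h>

// Destroy several communicators at once. Wrapping the destroy calls
// in a group lets NCCL tear them down together, avoiding the
// deadlocks that could occur when destroying them one at a time.
void destroy_comms(ncclComm_t* comms, int nComms) {
    ncclGroupStart();
    for (int i = 0; i < nComms; i++) {
        ncclCommDestroy(comms[i]);
    }
    ncclGroupEnd();
}
```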
IB Router Support
This release allows NCCL to operate across multiple InfiniBand subnets, improving communication in large networks. The library automatically detects when endpoints are on different subnets and establishes connections between them, using FLIDs for higher performance and adaptive routing.
Bug fixes and minor updates
The NCCL 2.22 release also includes several bug fixes and minor updates.
- Support for the allreduce tree algorithm on DGX Google Cloud.
- Logging of NIC names in IB asynchronous errors.
- Improved performance of registered send and receive operations.
- Added infrastructure code for NVIDIA Trusted Computing solutions.
- Provides separate traffic classes for IB and RoCE control messages to support advanced QoS.
- Supports PCI peer-to-peer communication between partitioned Broadcom PCI switches.
Summary
The NCCL 2.22 release introduces several important features and optimizations to improve the performance and efficiency of HPC and AI applications. Improvements include a new tuner plugin interface, support for static linking of plugins, and improved group semantics to prevent deadlocks.