
NVIDIA Releases NCCL 2.22, Offering Improved Memory Efficiency and Faster Initialization

By Crypto Flexs | September 21, 2024 | 3 Mins Read

Caroline Bishop
21 Sep 2024 13:38

NVIDIA introduces NCCL 2.22, focusing on memory efficiency, faster initialization, and a new cost estimation API for advanced HPC and AI applications.





The NVIDIA Collective Communications Library (NCCL) has released its latest version, NCCL 2.22, which delivers significant improvements aimed at optimizing memory usage, accelerating initialization, and introducing a cost estimation API. According to the NVIDIA technology blog, these updates are essential for high-performance computing (HPC) and artificial intelligence (AI) applications.

Release Highlights

NVIDIA Magnum IO NCCL is designed to optimize inter-GPU and multi-node communication essential for efficient parallel computing. Key features of the NCCL 2.22 release include:

  • Lazy connection setup: Significantly reduces GPU memory overhead by delaying connection creation until it is needed.
  • New API for cost estimation: New APIs help tune the overlap of computation and communication and allow inspection of the NCCL cost model.
  • ncclCommInitRank optimization: Duplicate topology queries are eliminated, resulting in up to 90% faster initialization for applications that create multiple communicators.
  • Multi-subnet support using IB routers: Added communication support for jobs spanning multiple InfiniBand subnets, enabling large-scale DL training jobs.

Detailed features

Lazy connection setup

NCCL 2.22 introduces lazy connection setup, which significantly reduces GPU memory usage by delaying connection creation until it is actually needed. This is especially useful for applications with a narrow usage pattern, such as repeatedly running the same algorithm. The feature is enabled by default but can be disabled by setting NCCL_RUNTIME_CONNECT=0.
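
For illustration, here is a minimal sketch of opting out of lazy connection setup from inside the application; NCCL_RUNTIME_CONNECT and ncclCommInitRank come from the release notes, while the setenv call and the surrounding initialization arguments are assumptions.

    /* Sketch: disable lazy connection setup before any communicator is created.
     * Leaving NCCL_RUNTIME_CONNECT unset keeps the new 2.22 default, which
     * delays connection creation until first use and lowers GPU memory overhead. */
    #include <stdlib.h>
    #include <nccl.h>

    void init_comm_eager(ncclComm_t *comm, int nranks, ncclUniqueId id, int rank) {
        /* Must be set before ncclCommInitRank for NCCL to pick it up. */
        setenv("NCCL_RUNTIME_CONNECT", "0", 1);
        ncclCommInitRank(comm, nranks, id, rank);
    }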

New Cost Model API

The new API, ncclGroupSimulateEnd, helps developers estimate the time an operation will take, making it easier to tune the overlap of computation and communication. Although the estimates may not perfectly match reality, they provide useful guidance for performance tuning.
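
As a rough sketch of how the simulation call might be used (assuming the ncclSimInfo_t structure, NCCL_SIM_INFO_INITIALIZER, and the estimatedTime field match the NCCL 2.22 headers; the buffers, stream, and communicator here are placeholders):

    /* Sketch: estimate the cost of an all-reduce without launching it.
     * Closing the group with ncclGroupSimulateEnd instead of ncclGroupEnd
     * fills the simulation info rather than executing the collective. */
    #include <stdio.h>
    #include <nccl.h>

    void estimate_allreduce(ncclComm_t comm, const float *sendbuf, float *recvbuf,
                            size_t count, cudaStream_t stream) {
        ncclSimInfo_t sim = NCCL_SIM_INFO_INITIALIZER;
        ncclGroupStart();
        ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, ncclSum, comm, stream);
        ncclGroupSimulateEnd(&sim);   /* nothing is enqueued on the stream */
        printf("estimated time: %f\n", sim.estimatedTime);
    }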

Initialization optimization

To minimize initialization overhead, the NCCL team introduced several optimizations, including lazy connection setup and intra-node topology convergence. As a result, ncclCommInitRank runs significantly faster for applications that create multiple communicators, with execution times reduced by up to 90%.
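
The pattern that benefits most is a job that builds several communicators per process, for example one per parallelism group. A hedged sketch of that pattern follows; the unique IDs are assumed to have been exchanged out of band (e.g. via MPI).

    /* Sketch: creating several communicators in one process. In 2.22 the
     * topology discovery done for the first ncclCommInitRank call is reused
     * by the later ones instead of being repeated. */
    #include <nccl.h>

    enum { NCOMMS = 4 };

    void init_comms(ncclComm_t comms[NCOMMS], int nranks, int rank,
                    const ncclUniqueId ids[NCOMMS]) {
        for (int i = 0; i < NCOMMS; i++)
            ncclCommInitRank(&comms[i], nranks, ids[i], rank);
    }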

New tuner plugin interface

The new tuner plugin interface (v3) provides a per-collective 2D cost table reporting the estimated time required for the operation. This allows external tuners to optimize the combination of algorithms and protocols for better performance.

Static plugin linking

For convenience and to avoid loading problems, NCCL 2.22 supports static linking of network or tuner plugins. Applications can opt in by setting NCCL_NET_PLUGIN or NCCL_TUNER_PLUGIN to STATIC_PLUGIN.
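
A minimal sketch of selecting the statically linked plugins from application code; doing this via setenv is an assumption, since the variables can equally be set in the launch environment.

    /* Sketch: tell NCCL to use the statically linked network and tuner plugins
     * instead of dlopen()-ing shared objects. Must run before the first NCCL call. */
    #include <stdlib.h>

    void use_static_plugins(void) {
        setenv("NCCL_NET_PLUGIN", "STATIC_PLUGIN", 1);
        setenv("NCCL_TUNER_PLUGIN", "STATIC_PLUGIN", 1);
    }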

Group semantics for abort and destroy

NCCL 2.22 introduces group semantics for ncclCommDestroy and ncclCommAbort, allowing multiple communicators to be destroyed or aborted simultaneously. This feature aims to prevent deadlocks and improve the user experience.
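
A sketch of the grouped teardown this enables, assuming ncclCommDestroy participates in the usual ncclGroupStart/ncclGroupEnd grouping as described:

    /* Sketch: destroy several communicators as a single group so NCCL can
     * tear them down together and avoid ordering-related deadlocks. */
    #include <nccl.h>

    void destroy_comms(ncclComm_t *comms, int ncomms) {
        ncclGroupStart();
        for (int i = 0; i < ncomms; i++)
            ncclCommDestroy(comms[i]);
        ncclGroupEnd();
    }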

IB Router Support

This release allows NCCL to operate across multiple InfiniBand subnets, improving communications in large networks. The library automatically detects and establishes connections between endpoints across multiple subnets using FLID for higher performance and adaptive routing.

Bug fixes and minor updates

The NCCL 2.22 release also includes several bug fixes and minor updates.

  • Support for the allreduce tree algorithm on DGX Google Cloud.
  • Logging NIC names for IB asynchronous errors.
  • Improved performance of registered send and receive operations.
  • Added infrastructure code for NVIDIA Trusted Computing solutions.
  • Provides separate traffic classes for IB and RoCE control messages to support advanced QoS.
  • Supports PCI peer-to-peer communication between partitioned Broadcom PCI switches.

Summary

The NCCL 2.22 release introduces several important features and optimizations to improve the performance and efficiency of HPC and AI applications. Improvements include a new tuner plugin interface, support for static linking of plugins, and improved group semantics to prevent deadlocks.

Image source: Shutterstock

