NVIDIA has unveiled TensorRT-LLM MultiShot, a new communication protocol designed to improve the efficiency of multi-GPU communication for generative AI workloads in production environments. According to NVIDIA, the technique leverages NVLink Switch technology to speed up multi-GPU communication by up to 3x.
Challenges of the Existing AllReduce
Low-latency inference is critical in AI applications and often requires multi-GPU setups. However, the existing AllReduce algorithm, which synchronizes computation across GPUs, can be inefficient because it involves many data exchange steps: traditional ring-based approaches require 2(N-1) steps, where N is the number of GPUs, and each step adds latency and synchronization overhead.
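To make the 2(N-1) step count concrete, here is a minimal Python simulation (not NVIDIA's implementation) of a sum ring-AllReduce: the tensor on each GPU is split into N chunks, a reduce-scatter phase takes N-1 steps, and an all-gather phase takes N-1 more.

```python
def ring_allreduce(data):
    """Simulate sum ring-AllReduce.

    data[g][c] = value of chunk c on simulated GPU g.
    Returns (per-GPU results, number of communication steps).
    """
    n = len(data)                          # number of simulated GPUs
    vals = [list(d) for d in data]
    steps = 0
    # Reduce-scatter: N-1 steps. Snapshot sends first so every GPU
    # forwards the value it held at the start of the step.
    for s in range(n - 1):
        sends = [(g, (g - s) % n, vals[g][(g - s) % n]) for g in range(n)]
        for g, c, v in sends:
            vals[(g + 1) % n][c] += v      # neighbor accumulates the chunk
        steps += 1
    # Now GPU g owns the fully reduced chunk (g + 1) % n.
    # All-gather: N-1 steps passing reduced chunks around the ring.
    for s in range(n - 1):
        sends = [(g, (g + 1 - s) % n, vals[g][(g + 1 - s) % n]) for g in range(n)]
        for g, c, v in sends:
            vals[(g + 1) % n][c] = v       # neighbor copies the reduced chunk
        steps += 1
    return vals, steps

results, steps = ring_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# With N = 3 GPUs this takes 2 * (3 - 1) = 4 steps, and every GPU
# ends with the elementwise sum [12, 15, 18].
```

Note that the step count grows linearly with the number of GPUs, which is exactly the scaling problem MultiShot targets.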
The TensorRT-LLM MultiShot Solution
TensorRT-LLM MultiShot addresses these problems by reducing the latency of AllReduce operations. It leverages the multicast capability of NVSwitch, which lets a GPU send data to all other GPUs simultaneously, so the operation completes in only two synchronization steps regardless of the number of GPUs involved, significantly improving efficiency.
The process is divided into a ReduceScatter phase and an AllGather phase. In the ReduceScatter phase, each GPU accumulates one shard of the result tensor; in the AllGather phase, each GPU broadcasts its fully reduced shard to all other GPUs. Because each GPU sends only its 1/N share of the data in each phase, this method reduces per-GPU bandwidth requirements and improves overall throughput.
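The two-phase pattern can be sketched in the same simulation style as above (again a sketch under stated assumptions, not NVIDIA's code): with multicast, all sends in a phase happen concurrently, so ReduceScatter and AllGather each count as a single step, for two steps total regardless of N.

```python
def multishot_allreduce(data):
    """Simulate a multicast-based two-phase sum AllReduce.

    data[g][c] = value of chunk c on simulated GPU g.
    Returns (per-GPU results, number of communication steps).
    """
    n = len(data)
    # Step 1: ReduceScatter. Every GPU multicasts its contribution to
    # each shard's owner concurrently, so GPU g ends up holding the
    # fully reduced shard g after one step.
    owned = [sum(data[src][g] for src in range(n)) for g in range(n)]
    # Step 2: AllGather. Every GPU multicasts its reduced shard to all
    # other GPUs concurrently, so each GPU holds the full result.
    results = [list(owned) for _ in range(n)]
    return results, 2

results, steps = multishot_allreduce([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Two steps whether N is 3 or 300, versus 2 * (N - 1) for the ring.
```

The constant step count is the key design point: as N grows, the ring's latency grows linearly while this pattern's stays flat, which is where the reported speedups come from.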
Implications for AI Performance
TensorRT-LLM MultiShot can achieve nearly a 3x speedup over the existing method, which is especially useful in scenarios requiring low latency and high parallelism. This advancement allows for reduced latency, or increased throughput at a given latency target, and can enable more effective scaling as more GPUs are added.
NVIDIA emphasizes the importance of understanding workload bottlenecks to optimize performance. The company is working closely with developers and researchers to implement new optimizations, with the goal of continuously improving the performance of the platform.