IBM Research has announced advances in the PyTorch framework aimed at making AI model training more efficient. Unveiled at the PyTorch Conference, the improvements include a new data loader that can handle massive amounts of data and significant throughput gains for large language model (LLM) training.
Improved data loader in PyTorch
A new high-throughput data loader lets PyTorch users seamlessly distribute their LLM training workloads across multiple machines and save checkpoints more efficiently, reducing redundant work. According to IBM Research, the tool was developed out of necessity by Davis Wertheimer and his colleagues, who needed a way to efficiently manage and stream large amounts of data across multiple devices.
The team initially found that the existing data loader was becoming a bottleneck in the training process. They iterated on and improved their approach, creating a PyTorch-native data loader that supports dynamic and adaptive operations. The tool ensures that previously seen data is not revisited, even if resource allocation changes in the middle of a job.
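The article does not include IBM's implementation, but the basic idea of a checkpointable, rank-aware streaming dataset can be sketched in plain PyTorch. The class below is a hypothetical illustration: each worker walks its own slice of the corpus and records how many samples it has emitted, so a resumed job skips data it has already seen. (The harder problem the IBM loader addresses, carrying that state over when the number of workers changes mid-job, is not shown here.)

```python
import torch
from torch.utils.data import IterableDataset

class ResumableShardDataset(IterableDataset):
    """Checkpointable, rank-aware streaming dataset (illustrative sketch only)."""

    def __init__(self, documents, rank, world_size):
        self.documents = documents      # e.g. pre-tokenized documents or shard paths
        self.rank = rank
        self.world_size = world_size
        self.samples_seen = 0           # persisted alongside the model checkpoint

    def state_dict(self):
        return {"samples_seen": self.samples_seen}

    def load_state_dict(self, state):
        self.samples_seen = state["samples_seen"]

    def __iter__(self):
        # Stride over the corpus so ranks never overlap, then skip whatever
        # this rank already consumed before the restart.
        shard = self.documents[self.rank::self.world_size]
        for i, doc in enumerate(shard):
            if i < self.samples_seen:
                continue
            self.samples_seen += 1
            yield torch.tensor(doc)
```

In practice such a dataset would be wrapped in a standard DataLoader, with its state_dict saved and restored together with the model checkpoint.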
In stress tests, the data loader streamed 2 trillion tokens without errors while running continuously for a month. It demonstrated the ability to load over 90,000 tokens per second per worker, which is equivalent to loading 500 billion tokens per day on 64 GPUs.
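As a quick sanity check of those figures (simple arithmetic, not IBM's benchmark code), the daily total follows directly from the per-worker rate, assuming one data-loader worker per GPU:

```python
tokens_per_sec_per_worker = 90_000   # reported per-worker streaming rate
num_gpus = 64                        # one loader worker per GPU (assumption)
seconds_per_day = 24 * 60 * 60

tokens_per_day = tokens_per_sec_per_worker * num_gpus * seconds_per_day
print(f"{tokens_per_day / 1e9:.0f} billion tokens/day")  # ~498 billion, i.e. roughly 500 billion
```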
Maximize training throughput
Another important focus for IBM Research is optimizing GPU usage to avoid bottlenecks in AI model training. The team used Fully Sharded Data Parallel (FSDP), which shards a model's parameters, gradients, and optimizer states evenly across multiple machines, improving the efficiency and speed of model training and tuning. Combining FSDP with torch.compile yielded a further significant improvement in throughput.
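The combination can be sketched in a few lines. The snippet below is a minimal, hypothetical setup (toy model, placeholder loss, launched with torchrun), not the Granite training stack; it only illustrates the order of operations: wrap the model with FSDP so its parameters, gradients, and optimizer state are sharded across ranks, then compile the wrapped module.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Assumes `torchrun` has set the usual rank/world-size environment variables.
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8).cuda()  # toy stand-in
model = FSDP(model, use_orig_params=True)   # shard params, grads, optimizer state
model = torch.compile(model)                # compile the sharded forward/backward

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
inputs = torch.randn(128, 8, 512, device="cuda")  # (seq_len, batch, d_model)

loss = model(inputs).pow(2).mean()          # placeholder loss, not a real LM objective
loss.backward()
optimizer.step()
```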
IBM Research scientist Linsong Chu highlighted that his team was one of the first to train a model using FSDP with torch.compile, reaching a training speed of 4,550 tokens per second per GPU on A100 hardware. This result was recently demonstrated with the Granite 7B model released on Red Hat Enterprise Linux AI (RHEL AI).
Additional optimizations are being explored, including support for the FP8 (8-bit floating point) data type available on Nvidia's H100 GPU, which can increase throughput by up to 50 percent. IBM Research scientist Raghu Ganti highlighted the significant impact these improvements have on reducing infrastructure costs.
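The article does not say which software stack is being used for FP8, so the sketch below shows just one plausible path on H100-class hardware: NVIDIA's Transformer Engine, whose layers run their matrix multiplies in FP8 inside an fp8_autocast region while keeping master weights in higher precision.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# HYBRID = E4M3 for forward activations/weights, E5M2 for backward gradients.
fp8_recipe = DelayedScaling(margin=0, fp8_format=Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()          # drop-in FP8-capable linear
inputs = torch.randn(16, 4096, device="cuda", requires_grad=True)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inputs)                                   # GEMM runs in FP8

out.float().sum().backward()                              # backward outside the context
```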
Future outlook
IBM Research continues to explore new areas, including using FP8 for model training and tuning on IBM's Artificial Intelligence Unit (AIU). The team is also focusing on Triton, OpenAI's open-source language and compiler for GPU programming, which aims to further optimize training by compiling Python code into hardware-specific kernels.
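For readers unfamiliar with Triton, its appeal is that GPU kernels are written as ordinary Python functions and compiled down to hardware-specific code. The vector-addition kernel below is the standard introductory example, included purely as an illustration and unrelated to IBM's work:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # each program handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

a = torch.randn(10_000, device="cuda")
b = torch.randn(10_000, device="cuda")
assert torch.allclose(add(a, b), a + b)
```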
These advances aim to bring faster cloud-based model training out of the experimental stage and into broader community use, potentially transforming the AI model training landscape.