NVIDIA has unveiled new features in RAPIDS cuDF that significantly improve the performance of the pandas library when processing large text-intensive datasets. According to the NVIDIA Technical Blog, these improvements will allow data scientists to accelerate their workloads by up to 30x.
RAPIDS cuDF and pandas
RAPIDS is a collection of open source GPU-accelerated data science and AI libraries, and cuDF is a Python GPU DataFrame library designed for loading, combining, aggregating, and filtering data. pandas, a widely used data analysis and manipulation library in Python, has struggled with processing speed and efficiency as dataset sizes have grown, especially on CPU-only systems.
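To make the division of labor concrete, here is a minimal sketch of direct cuDF usage: the same load/filter/aggregate operations pandas offers, executed on the GPU. The column names and values are purely illustrative.

```python
# Minimal sketch of direct cuDF usage; data and column names are illustrative.
import cudf

df = cudf.DataFrame({
    "user_id": [1, 2, 1, 3],
    "amount": [10.0, 25.5, 7.2, 3.1],
})

# Filter rows, then aggregate per user -- a typical load/filter/aggregate pattern.
totals = df[df["amount"] > 5.0].groupby("user_id")["amount"].sum()
print(totals)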
At GTC 2024, NVIDIA announced that RAPIDS cuDF can accelerate pandas by about 150x without any code changes. Google later announced that RAPIDS cuDF will be natively available in Google Colab, making it easier for data scientists to use.
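The "no code changes" mode works by loading the cudf.pandas accelerator before pandas is imported, as in the sketch below. The CSV file name and column used here are placeholders, not part of the announcement.

```python
# In a notebook, load the accelerator before importing pandas:
#   %load_ext cudf.pandas
#   import pandas as pd
#
# For a script, the equivalent is:
#   python -m cudf.pandas my_script.py
import pandas as pd  # with cudf.pandas active, supported operations run on the GPU

df = pd.read_csv("reviews.csv")          # placeholder file name
counts = df.groupby("product_id").size() # unmodified pandas code
```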
Pushing the limits
User feedback on the initial release of cuDF highlighted some limitations, particularly with regard to the size and type of datasets that could benefit from acceleration.
- To maximize acceleration, datasets must fit into GPU memory, which limits the data size and complexity of operations that can be performed.
- Text-heavy data sets face limitations, with the original cuDF release only supporting a maximum of 2.1 billion characters per column.
To address these issues, the latest release of RAPIDS cuDF includes:
- Up to 30x speedup on larger data sets and more complex workloads with optimized CUDA unified memory.
- String columns are no longer capped at 2.1 billion characters, allowing cuDF to process tabular text datasets of up to 2.1 billion rows.
Accelerated data processing through unified memory
cuDF relies on CPU fallback to ensure a smooth experience. If memory requirements exceed GPU capacity, cuDF transfers data to CPU memory and uses pandas for processing. However, to avoid frequent CPU fallbacks, the data set should ideally fit into GPU memory.
With CUDA Unified Memory, cuDF can now scale pandas workloads beyond GPU memory. Unified Memory provides a single address space spanning CPU and GPU, enabling virtual memory allocations larger than the available GPU memory and migrating data between host and device as needed. This lets workloads grow past physical GPU memory, though keeping the working set within GPU memory still delivers the highest acceleration.
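Managed (unified) memory can also be configured explicitly from Python through RMM, the RAPIDS memory manager. The sketch below assumes RMM is installed alongside cuDF; the default allocator settings in RAPIDS 24.08 may already differ, so treat this as an illustration rather than required setup.

```python
# Sketch: have cuDF allocate from a CUDA managed-memory pool via RMM, so
# allocations can exceed physical GPU memory and migrate on demand.
import numpy as np
import rmm
import cudf

rmm.reinitialize(managed_memory=True, pool_allocator=True)

# Subsequent cuDF allocations come from the managed pool.
df = cudf.DataFrame({"x": np.arange(1_000_000)})
print(df["x"].sum())
```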
Benchmarks show that performing data joins with cuDF on a 10 GB dataset, on a GPU with 16 GB of memory, can achieve up to a 30x speedup over CPU-only pandas. This is a significant improvement, especially for datasets larger than 4 GB, which previously ran into performance issues due to GPU memory constraints.
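For reference, the join itself is ordinary pandas code; the data below is a toy stand-in for the benchmark's 10 GB inputs.

```python
# Run with cudf.pandas enabled (e.g. `python -m cudf.pandas join_example.py`)
# so the merge executes on the GPU; the code itself is unmodified pandas.
import pandas as pd

transactions = pd.DataFrame({
    "user_id": [1, 2, 2, 3],
    "amount": [9.99, 4.50, 12.00, 3.25],
})
users = pd.DataFrame({
    "user_id": [1, 2, 3],
    "country": ["US", "DE", "JP"],
})

# A typical inner join on a shared key, the kind of operation measured above.
joined = transactions.merge(users, on="user_id", how="inner")
print(joined.groupby("country")["amount"].sum())
```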
Processing large-scale tabular text data
The original cuDF release’s 2.1 billion character per column limit presented challenges for large datasets. With the new release, cuDF can now handle tabular text data of up to 2.1 billion rows, making pandas a viable tool for data preparation in generative AI pipelines.
These improvements will make pandas code run much faster, especially on text-heavy datasets such as product reviews, customer service logs, or tables with large amounts of location or user ID data.
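A small sketch of that text-heavy pattern, using pandas string methods that cudf.pandas can accelerate; the column names and keyword are illustrative.

```python
# Flag reviews mentioning a keyword, then count matches per user.
import pandas as pd

reviews = pd.DataFrame({
    "user_id": [101, 102, 101],
    "text": [
        "Great product, fast shipping",
        "Stopped working after a week",
        "Great value for the price",
    ],
})

reviews["mentions_great"] = reviews["text"].str.lower().str.contains("great")
print(reviews.groupby("user_id")["mentions_great"].sum())
```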
Get started
All of these features are available in RAPIDS 24.08; installation instructions are in the RAPIDS Installation Guide. The Unified Memory feature is only supported on Linux-based systems.
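After installing per the guide, a quick sanity check like the following (a minimal sketch, not part of the release notes) confirms that a recent cuDF build is present and the GPU is usable:

```python
# Verify that cuDF 24.08 or newer is installed and run a trivial GPU computation.
import cudf

print(cudf.__version__)
print(cudf.Series([1, 2, 3]).sum())
```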