Parquet writers offer a variety of encoding and compression options that are turned off by default. Enabling these options can provide better lossless compression for your data, but it is important to understand which options to use for optimal performance, according to the NVIDIA Tech Blog.
Understanding Parquet Encoding and Compression
Parquet’s encoding phase reorganizes the data to reduce its size while preserving access to each data point. The compression phase further reduces the total size in bytes, but requires decompression before the data can be accessed again. The Parquet format includes two delta encodings designed to optimize the storage of string data: DELTA_LENGTH_BYTE_ARRAY (DLBA) and DELTA_BYTE_ARRAY (DBA).
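As a minimal sketch of how these two phases are configured independently at write time (shown here with pyarrow, whose writer exposes both options directly; the file names and data are illustrative):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Illustrative table with a single string column.
    table = pa.table({"city": ["Tokyo", "Toronto", "Tokyo", "Turin"]})

    # Encoding (how values are laid out on the page) and compression
    # (the byte-level codec applied afterward) are chosen independently.
    pq.write_table(table, "dict_snappy.parquet",
                   use_dictionary=True,   # RLE_DICTIONARY encoding
                   compression="SNAPPY")  # codec applied to encoded pages

    pq.write_table(table, "plain_uncompressed.parquet",
                   use_dictionary=False,  # PLAIN encoding
                   compression="NONE")    # skip the compression phase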
RAPIDS libcudf and cudf.pandas
RAPIDS is a collection of open source GPU-accelerated data science libraries. In this context, libcudf is the CUDA C++ library for columnar data processing. It supports GPU-accelerated readers, writers, relational algebra functions, and columnar transformations. The Python cudf.pandas library accelerates existing pandas code by up to 150x.
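A minimal sketch of the zero-code-change workflow, assuming an existing pandas script and placeholder file names:

    # In a script:   python -m cudf.pandas my_script.py
    # In a notebook: %load_ext cudf.pandas
    import pandas as pd  # transparently backed by cuDF on the GPU where possible

    df = pd.read_parquet("strings.parquet")  # placeholder input file
    df.to_parquet("strings_zstd.parquet", compression="zstd")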
Benchmarking using Kaggle String Data
We compared encoding and compression methods using a dataset of 149 string columns from Kaggle, containing 12 billion total characters with a total file size of 4.6 GB. The study found that the encoded size difference between libcudf and arrow-cpp was less than 1%, and that file sizes increased by 3-8% when using the ZSTD implementation in nvCOMP 3.0.6 compared to libzstd 1.4.8+dfsg-3build1.
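A simplified sketch of this kind of sweep, comparing total file size across writer settings (the input file name is a placeholder, and per-column delta encodings are not covered here):

    import os
    import pandas as pd  # run under cudf.pandas for GPU acceleration

    df = pd.read_parquet("kaggle_strings.parquet")  # placeholder input

    # Write the same data with different codecs and compare output sizes.
    for codec in [None, "snappy", "zstd"]:
        out = f"strings_{codec or 'uncompressed'}.parquet"
        df.to_parquet(out, engine="pyarrow", compression=codec)
        print(f"{codec or 'uncompressed':>12}: {os.path.getsize(out) / 1e9:.2f} GB")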
String encoding in Parquet
Parquet string data is represented using a byte array physical type. Most writers default to RLE_DICTIONARY encoding for string data, which uses dictionary pages to map string values to integers. When dictionary pages become too large, writers fall back to PLAIN encoding.
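You can see which encodings a writer actually chose, including any PLAIN fallback, by inspecting the column-chunk metadata; this sketch uses pyarrow and a placeholder file name:

    import pyarrow.parquet as pq

    meta = pq.ParquetFile("strings.parquet").metadata  # placeholder file name
    for rg in range(meta.num_row_groups):
        for col in range(meta.num_columns):
            chunk = meta.row_group(rg).column(col)
            # Columns whose dictionary page overflowed report PLAIN here
            # alongside (or instead of) the dictionary encoding.
            print(chunk.path_in_schema, chunk.encodings)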
Total file size based on encoding and compression
For the 149 string columns in the dataset, the default settings of dictionary encoding and SNAPPY compression produce a total file size of 4.6 GB. ZSTD compression outperforms SNAPPY, and both outperform the uncompressed option. The best single setting for the dataset is default (dictionary) encoding with ZSTD compression, with delta encodings available for further reduction under certain conditions.
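Switching to ZSTD is a one-argument change in the writer; a sketch using cuDF directly (file names are placeholders):

    import cudf  # GPU DataFrame library backed by libcudf

    gdf = cudf.read_parquet("strings.parquet")  # placeholder input
    # Keep the default dictionary encoding, but swap the codec to ZSTD.
    gdf.to_parquet("strings_zstd.parquet", compression="ZSTD")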
When to choose delta encoding
Delta encoding is most useful for data with high cardinality or long string lengths, and typically achieves smaller file sizes in those cases. For string columns with fewer than 50 characters, DBA encoding can provide significant file size reductions, especially for sorted or semi-sorted data.
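Where delta encoding fits your data, recent pyarrow releases let you request it per column (dictionary encoding must be disabled for the setting to apply), and libcudf's C++ writer options offer similar per-column control; the column names and data below are illustrative:

    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        "url":   ["https://example.com/a/1", "https://example.com/a/2"],
        "label": ["x", "y"],
    })

    # Request delta encodings for the string columns; per-column encodings
    # require dictionary encoding to be turned off.
    pq.write_table(
        table,
        "delta.parquet",
        use_dictionary=False,
        column_encoding={"url": "DELTA_BYTE_ARRAY",
                         "label": "DELTA_LENGTH_BYTE_ARRAY"},
        compression="ZSTD",
    )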
Reader and writer performance
The GPU-accelerated cudf.pandas library showed impressive performance compared to pandas, with Parquet reads running 17-25x faster. Using cudf.pandas with an RMM memory pool further improved throughput, reaching 552 MB/s for reads and 263 MB/s for writes.
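One way to set up an RMM memory pool before doing I/O, assuming you are driving cuDF directly (the pool size and file names here are illustrative):

    import rmm
    # Pre-allocate a device memory pool so reads and writes avoid
    # repeated raw CUDA allocations; 2 GiB is an illustrative size.
    rmm.reinitialize(pool_allocator=True, initial_pool_size=2 << 30)

    import cudf
    gdf = cudf.read_parquet("strings.parquet")  # placeholder input
    gdf.to_parquet("roundtrip.parquet", compression="ZSTD")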
Conclusion
RAPIDS libcudf provides flexible GPU-accelerated tools for reading and writing columnar data in formats such as Parquet, ORC, JSON, and CSV. For those looking to leverage GPU acceleration for Parquet processing, RAPIDS cudf.pandas and libcudf offer significant performance benefits.