Data curation is critical to developing effective and fair large language models (LLMs). High-quality, diverse training data directly impacts LLM performance by reducing bias, inconsistency, and redundancy. NVIDIA recently announced the open-source release of NVIDIA NeMo Curator, a data curation library designed to improve LLM training accuracy through scalable and efficient dataset preparation.
The Importance of Data Curation
For low-resource languages in particular, web-crawled corpora such as OSCAR are essential for training localized multilingual LLMs. However, such data often contains noise, irrelevant content, duplicates, and formatting issues. Effective data curation is essential to mitigate these issues and ensure strong LLM performance. NeMo Curator provides a customizable, modular interface that produces high-quality tokens, simplifying pipeline construction and accelerating model convergence.
NeMo Curator Overview
NeMo Curator leverages Dask and RAPIDS for GPU-accelerated data curation, enabling users to mine high-quality text at scale from large uncurated web corpora and custom datasets. As an example, a data curation pipeline can be built on the Thai Wikipedia dataset, a small subset of Wikipedia that can be processed on a single GPU. Wikipedia is considered high quality for LLM pretraining thanks to its accurate and well-structured content; NeMo Curator improves on it further by detecting and filtering out low-quality documents, ensuring that only the best data is used for training.
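Under the hood, the curation modules run on a Dask cluster; on a GPU machine this is typically a dask_cuda cluster with one worker per GPU. Below is a minimal sketch of how such a client might be started before running any curation steps; this uses dask_cuda directly, and the exact setup (including any NeMo Curator helper utilities for it) may differ by version.

```python
# Minimal sketch: start a single-GPU Dask cluster for GPU-accelerated curation.
# Assumes dask-cuda is available (pulled in when NeMo Curator is installed with
# the cuda12x extra) and that one NVIDIA GPU is visible.
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

if __name__ == "__main__":
    # One worker per GPU; this tutorial's hardware is a single NVIDIA A10.
    cluster = LocalCUDACluster(n_workers=1)
    client = Client(cluster)
    print(client)  # confirm the cluster is up before running curation modules
```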
Example data curation pipeline
Using Thai Wikipedia as the example dataset, the data curation pipeline involves several steps:
- Download the dataset and extract it as JSONL files.
- Perform preliminary data cleaning, including language separation and Unicode text fixing, as sketched after this list.
- Apply advanced cleaning, including GPU-accelerated exact and fuzzy deduplication and heuristic filtering.
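The first two steps might look roughly as follows. This is a hedged sketch assuming the nemo_curator APIs shown here (download_wikipedia, Modify, UnicodeReformatter); the output paths and dump date are placeholders, and language identification is omitted for brevity.

```python
# Hedged sketch of the download and preliminary-cleaning stages.
# Paths and the dump date are placeholders; API details may vary by version.
from nemo_curator import Modify
from nemo_curator.download import download_wikipedia
from nemo_curator.modifiers import UnicodeReformatter


def main():
    # Stage 1: download the Thai Wikipedia dump and extract it to JSONL shards.
    dataset = download_wikipedia(
        "./thai_wikipedia_raw",   # placeholder output directory
        language="th",
        dump_date="20240201",     # placeholder; use an available dump date
    )

    # Stage 2: preliminary cleaning -- normalize broken Unicode in the text field.
    cleaner = Modify(UnicodeReformatter())
    dataset = cleaner(dataset)

    # Persist the cleaned shards for the advanced-cleaning stages.
    dataset.to_json("./thai_wikipedia_cleaned")


if __name__ == "__main__":
    main()
```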
The full code sample for this tutorial can be found in the NVIDIA NeMo Curator GitHub repository.
Prerequisites and setup
The following hardware and software setup is recommended for GPU-accelerated deduplication:
- NVIDIA GPU: This tutorial uses the NVIDIA A10 24GB GPU.
- CUDA and NVIDIA Drivers: CUDA 12.2 with driver 535.154.05.
- Ubuntu 22.04.
- NVIDIA Container Toolkit version 1.14.6.
To install the NeMo Curator library, run the following command:
git clone https://github.com/NVIDIA/NeMo-Curator.git
cd NeMo-Curator
pip install --extra-index-url https://pypi.nvidia.com ".[cuda12x]"
Advanced Data Cleaning
NeMo Curator improves data quality by applying advanced curation techniques such as deduplication and heuristic filtering. For example, the ExactDuplicates class removes identical documents using a GPU-accelerated implementation built on the RAPIDS cuDF library. Similarly, the FuzzyDuplicates class removes near-identical documents using the computationally efficient MinHashLSH algorithm.
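A hedged sketch of exact deduplication is shown below. The column names ("id", "text"), paths, and the removal step are assumptions about the prepared dataset and follow the pattern used in NeMo Curator tutorials; they may need adjusting for your version and data.

```python
# Hedged sketch: GPU-accelerated exact deduplication with ExactDuplicates.
# Column names and paths are assumptions, not guaranteed by the library.
from nemo_curator import ExactDuplicates
from nemo_curator.datasets import DocumentDataset

# Load the cleaned JSONL shards with the GPU (cuDF) backend.
dataset = DocumentDataset.read_json("./thai_wikipedia_cleaned", backend="cudf")

# Hash every document on the GPU and collect groups of identical documents.
exact_dups = ExactDuplicates(id_field="id", text_field="text", hash_method="md5")
duplicates = exact_dups(dataset)  # documents whose hashes collide

# Keep the first document in each hash group and drop the rest.
docs_to_remove = duplicates.df.map_partitions(
    lambda part: part[part["_hashes"].duplicated(keep="first")]
)
deduped = DocumentDataset(
    dataset.df[~dataset.df["id"].isin(docs_to_remove["id"].compute())]
)
deduped.to_json("./thai_wikipedia_deduped")
```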
Heuristic filtering
Heuristic filtering helps remove low-quality content from a dataset using simple, easy-to-compute rules. At the time of publication, NeMo Curator provides 24 heuristics for natural languages and 8 heuristics for coding languages. These filters can be selected and configured through a YAML configuration file, or chained directly in Python as sketched below.
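The sketch below chains two of the built-in heuristics with ScoreFilter. The filter choices, thresholds, paths, and column names are illustrative assumptions, and Thai-specific word splitting may require additional configuration.

```python
# Hedged sketch: chain a few built-in heuristic filters with ScoreFilter.
# Thresholds and paths are illustrative and not tuned for Thai text.
import nemo_curator as nc
from nemo_curator.datasets import DocumentDataset
from nemo_curator.filters import WordCountFilter, RepeatingTopNGramsFilter

dataset = DocumentDataset.read_json("./thai_wikipedia_deduped", backend="pandas")

filter_pipeline = nc.Sequential([
    # Drop very short or extremely long documents.
    nc.ScoreFilter(WordCountFilter(min_words=50, max_words=100000),
                   text_field="text"),
    # Drop documents dominated by a few repeated bigrams.
    nc.ScoreFilter(RepeatingTopNGramsFilter(n=2, max_repeating_ngram_ratio=0.2),
                   text_field="text"),
])
high_quality = filter_pipeline(dataset)
high_quality.to_json("./thai_wikipedia_filtered")
```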
Next steps
This tutorial demonstrated how to set up a sample data curation pipeline for Thai Wikipedia data. For more details and examples, see the data curation examples collection on GitHub. Companies can also request access to the NVIDIA NeMo Curator microservice, which offers streamlined performance and scalability.