According to IBM Research, the company has unveiled an innovation aimed at expanding the data processing pipeline for enterprise AI training. The advancement is designed to leverage abundant CPU capacity to accelerate the creation of powerful AI models such as IBM's Granite models.
Optimizing data preparation
Before training an AI model, a large amount of data needs to be prepared. This data often comes from sources such as websites, PDFs, and news articles, and must go through several preprocessing steps, including filtering out irrelevant HTML code, removing duplicates, and screening for abusive content. These tasks are essential, but unlike model training itself, they do not depend on scarce GPUs and can run on far more plentiful CPUs.
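As a rough illustration (not IBM's actual pipeline), a minimal version of such a preprocessing pass might look like the sketch below; the regex-based HTML stripping, hash-based exact deduplication, and keyword blocklist are deliberate simplifications of what production pipelines use:

```python
import hashlib
import re

def strip_html(text: str) -> str:
    """Remove HTML tags, keeping only the visible text (naive regex approach)."""
    return re.sub(r"<[^>]+>", " ", text)

def is_exact_duplicate(text: str, seen_hashes: set) -> bool:
    """Detect exact duplicates by hashing the normalized document."""
    digest = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False

def passes_quality_filter(text: str, blocklist=("banned-term",)) -> bool:
    """Screen out very short documents and ones matching a (hypothetical) blocklist."""
    words = text.lower().split()
    return len(words) >= 20 and not any(term in words for term in blocklist)

def preprocess(documents):
    """Yield cleaned, deduplicated, filtered documents from raw inputs."""
    seen = set()
    for raw in documents:
        text = strip_html(raw)
        if not is_exact_duplicate(text, seen) and passes_quality_filter(text):
            yield text
```

Every operation here is plain string and hash work, which is exactly why this stage of the pipeline is a good fit for CPUs.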
Petros Zerfos, principal research scientist for IBM Research’s Watsonx data engineering, emphasized the importance of efficient data processing. “A lot of the time and effort that goes into training these models is spent preparing the data for those models,” Zerfos said. His team has been drawing on expertise from a variety of domains, including natural language processing, distributed computing, and storage systems, to develop ways to improve the efficiency of the data processing pipeline.
CPU capacity utilization
Many steps in the data processing pipeline involve "embarrassingly parallel" computations, where each document can be processed independently of the others. This parallelism allows the work to be fanned out across many CPUs, significantly speeding up data preparation. Other steps, however, such as removing duplicate documents, require access to the entire data set at once and cannot be split into fully independent per-document tasks.
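The contrast between the two kinds of steps can be sketched in a few lines of Python; this is purely illustrative, and the cleaning logic and sample documents are stand-ins:

```python
from multiprocessing import Pool

def clean_document(doc: str) -> str:
    """Per-document work: each document is handled independently,
    so this map step is embarrassingly parallel."""
    return doc.strip().lower()  # stand-in for real filtering/cleaning

def deduplicate(docs):
    """Global step: needs visibility into the whole corpus at once,
    so it cannot be split into independent per-document tasks."""
    return list(dict.fromkeys(docs))  # order-preserving exact dedup

if __name__ == "__main__":
    corpus = ["  Hello World  ", "hello world", "Another doc"]
    with Pool() as pool:                  # fan the map step out across CPU cores
        cleaned = pool.map(clean_document, corpus)
    unique = deduplicate(cleaned)         # then a single global pass
    print(unique)                         # ['hello world', 'another doc']
```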
To accelerate IBM’s Granite model development, the team developed a process to rapidly provision and utilize tens of thousands of CPUs. This approach involved marshalling idle CPU capacity across IBM’s Cloud data center network and ensuring high communication bandwidth between the CPUs and data storage. Traditional object storage systems often cannot feed data fast enough, leaving CPUs idle, so the team used IBM’s high-performance Storage Scale file system to cache active data efficiently.
Scaling AI training
Last year, IBM scaled up to 100,000 vCPUs on IBM Cloud to process 14 petabytes of raw data, generating 40 trillion tokens for AI model training. The team automated these data pipelines using Kubeflow on IBM Cloud, and their method proved 24x faster at processing Common Crawl data than previous techniques.
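The article doesn't publish IBM's actual pipeline definitions, but a Kubeflow Pipelines automation of this kind generally follows the pattern below, shown with the kfp v2 Python SDK; the component names, steps, and paths here are hypothetical:

```python
from kfp import compiler, dsl

@dsl.component
def filter_documents(raw_path: str) -> str:
    # Hypothetical step: a real component would pull documents from
    # object storage, drop boilerplate and duplicates, and write the
    # filtered set back, returning its location.
    return raw_path + "/filtered"

@dsl.component
def tokenize_documents(filtered_path: str) -> str:
    # Hypothetical step: convert the filtered text into training tokens.
    return filtered_path + "/tokens"

@dsl.pipeline(name="data-prep-pipeline")
def data_prep_pipeline(raw_path: str = "s3://example-bucket/raw"):
    # Chain the steps; Kubeflow handles scheduling and retries.
    filtered = filter_documents(raw_path=raw_path)
    tokenize_documents(filtered_path=filtered.output)

if __name__ == "__main__":
    # Compile to a spec that can be submitted to a Kubeflow cluster.
    compiler.Compiler().compile(data_prep_pipeline, "data_prep_pipeline.yaml")
```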
All of IBM’s open-source Granite code and language models are trained on data prepared through these optimized pipelines. IBM has also contributed the Data Prep Kit to the AI community, a toolkit hosted on GitHub that streamlines data preparation for large language model applications, supporting pretraining, fine-tuning, and retrieval-augmented generation (RAG) use cases. Built on distributed processing frameworks such as Spark and Ray, the kit allows developers to build scalable custom modules.
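The Data Prep Kit's own module interface is documented in its GitHub repository; the snippet below shows only the underlying Ray pattern that such scalable modules build on, not the kit's actual API:

```python
import ray

ray.init()  # connect to (or start) a Ray cluster

@ray.remote
def transform_batch(batch):
    """One scalable module: transforms a batch of documents.
    Ray schedules these tasks across all available CPUs in the cluster."""
    return [doc.strip().lower() for doc in batch]  # stand-in transform

batches = [["Doc A", "Doc B"], ["Doc C"], ["Doc D", "Doc E"]]
futures = [transform_batch.remote(b) for b in batches]  # fan out
results = ray.get(futures)                              # gather
print([doc for batch in results for doc in batch])
```

Because each batch is independent, the same script scales from a laptop to a multi-node cluster simply by pointing ray.init() at a larger deployment.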
For more information, visit the official IBM Research blog.