In the first part of this series, we provided an overview of the IVF-PQ algorithm: its basis in the IVF-Flat algorithm and its use of Product Quantization (PQ) to compress the index and support larger datasets. In this second part, we shift our focus to the practical aspects of IVF-PQ performance tuning, which is especially important for achieving optimal results on billion-scale datasets.
Tuning parameters for index building
IVF-PQ shares some parameters with IVF-Flat, such as the coarse-level indexing and search hyperparameters, but it introduces additional parameters that control the compression. One important parameter is n_lists, which determines the number of partitions (inverted lists) into which the input dataset is clustered. Performance is affected by both the number of lists probed and their size. Experimental results show that n_lists values in the range of 10K to 50K perform well at all recall levels, although the best value may vary depending on the dataset.
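As a concrete starting point, here is a minimal sketch of building an IVF-PQ index with the cuVS Python API (cuvs.neighbors.ivf_pq); the random dataset, its shape, and the n_lists value are illustrative assumptions, not recommendations.

```python
# Minimal IVF-PQ build sketch using the cuVS Python bindings.
import cupy as cp
from cuvs.neighbors import ivf_pq

# Illustrative random dataset: 1M vectors with 96 features
dataset = cp.random.random_sample((1_000_000, 96), dtype=cp.float32)

# Cluster the dataset into 10K inverted lists (the lower end of the range above)
index_params = ivf_pq.IndexParams(n_lists=10_000, metric="sqeuclidean")
index = ivf_pq.build(index_params, dataset)
```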
Another important parameter is pq_dim, which controls the compression ratio. A good technique for tuning this parameter is to start at 1/4 of the number of features in the dataset and increase it gradually. Figure 2 in the original blog post shows a significant decrease in QPS as pq_dim grows, which can be attributed to factors such as the increased number of compute operations per CUDA block and the larger shared memory requirements.
The pq_bits parameter ranges from 4 to 8 and controls the number of bits used for each individual PQ code, which affects both the codebook size and recall. Smaller values shrink the lookup table (LUT); fitting the LUT into shared memory can improve search speed, but at a cost in recall.
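To make the trade-off concrete, the sketch below sets pq_dim and pq_bits following the rules of thumb above; the specific values are assumptions to adapt to your own dataset.

```python
from cuvs.neighbors import ivf_pq

n_features = 96           # dimensionality of the example dataset above
pq_dim = n_features // 4  # start at 1/4 of the features, increase gradually

index_params = ivf_pq.IndexParams(
    n_lists=10_000,
    pq_dim=pq_dim,  # number of PQ subspaces; higher improves recall but lowers QPS
    pq_bits=8,      # bits per PQ code (4-8); fewer bits shrink the LUT but cost recall
)
```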
Additional parameters
The codebook_kind parameter determines how the codebooks of the second-level quantizer are constructed: either one per subspace or one per cluster. The choice between these options can affect training time, GPU shared memory usage, and recall. The kmeans_n_iters and kmeans_trainset_fraction parameters also matter, but they rarely need to be adjusted.
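The sketch below shows where these remaining build-time parameters fit, assuming the cuVS Python IndexParams fields codebook_kind, kmeans_n_iters, and kmeans_trainset_fraction; the values shown are illustrative defaults rather than recommendations.

```python
from cuvs.neighbors import ivf_pq

index_params = ivf_pq.IndexParams(
    n_lists=10_000,
    pq_dim=24,
    pq_bits=8,
    codebook_kind="subspace",      # or "cluster": one codebook per subspace vs. per list
    kmeans_n_iters=20,             # k-means iterations for the coarse clustering
    kmeans_trainset_fraction=0.1,  # fraction of the dataset sampled for k-means training
)
```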
Tuning parameters for search
The n_probes parameter, discussed in the previous IVF-Flat blog post, remains essential for balancing search accuracy and throughput. IVF-PQ provides two additional parameters, internal_distance_dtype and lut_dtype, which control the data type used to represent distances (or similarities) during the search and to store the LUTs, respectively. Adjusting these parameters can have a significant impact on performance, especially for high-dimensional datasets.
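Here is a sketch of a search with reduced-precision LUTs, assuming the cuVS Python SearchParams fields shown; storing the LUTs as float16 halves their size, making them more likely to fit in shared memory. Shapes and parameter values are illustrative.

```python
import cupy as cp
import numpy as np
from cuvs.neighbors import ivf_pq

# Illustrative setup: random dataset and queries, index built as before
dataset = cp.random.random_sample((1_000_000, 96), dtype=cp.float32)
queries = cp.random.random_sample((10_000, 96), dtype=cp.float32)
index = ivf_pq.build(ivf_pq.IndexParams(n_lists=10_000), dataset)

search_params = ivf_pq.SearchParams(
    n_probes=50,                         # inverted lists scanned per query
    internal_distance_dtype=np.float32,  # precision of distances computed during search
    lut_dtype=np.float16,                # precision of the stored lookup tables
)
distances, neighbors = ivf_pq.search(search_params, index, queries, k=10)
```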
Improving recall through refinement
When tuning parameters alone is not enough to achieve the desired recall, refinement offers a promising alternative. This separate operation, performed after the ANN search, recomputes exact distances for the selected candidates and reranks them. Refinement can significantly improve recall, as shown in Figure 4 of the original blog post, but it requires access to the source dataset.
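Below is a sketch of this two-pass pattern, assuming the refine helper exposed by cuvs.neighbors: the ANN search oversamples candidates, and refinement recomputes exact distances against the uncompressed dataset to rerank them. The 4x oversampling ratio is an illustrative assumption.

```python
from cuvs.neighbors import ivf_pq, refine

k = 10
# First pass: retrieve 4x more ANN candidates than ultimately needed
# (dataset, queries, index, and search_params as in the previous sketch)
_, candidates = ivf_pq.search(search_params, index, queries, k=4 * k)

# Second pass: recompute exact distances on the source dataset and keep the top-k
distances, neighbors = refine(dataset, queries, candidates, k=k)
```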
Summary
This series on accelerating vector search with inverted file indexes covers two cuVS algorithms, IVF-Flat and IVF-PQ. IVF-PQ extends IVF-Flat with PQ compression to enable faster searches and to handle multi-billion-vector datasets within limited GPU memory. By fine-tuning the parameters for index building and search, data practitioners can efficiently achieve the best results. The RAPIDS cuVS library provides vector search algorithms for a wide range of use cases, from exact search to low-precision, high-QPS ANN methods.
For practical tuning of IVF-PQ parameters, see the IVF-PQ notebook on GitHub. For more information on the provided API, see the cuVS documentation.