Jeff Johnson f39e0c1bd1 Fix GpuIndexFlat float16 memory bloat
Summary:
D37777979 included a change to allow CPU index types to be used as the IVF coarse quantizer. For residual computation, the centroids of the coarse quantizer's IVF cells need to be on the GPU in float32. So if a GpuIndexFlat used as the coarse quantizer for an IVF index stored float16 data, or if a CPU index was used as the coarse quantizer, a shadow copy of the centroids was made in float32 for IVF usage.

However, this shadow copy is only needed if a GPU float16 flat index is used as an IVF coarse quantizer. Previously we were always duplicating this data whether a GpuIndexFlat was used in an IVF index or not.

This diff restricts construction of the shadow float32 data to the cases where the GpuIndexFlat is actually used in an IVF index. Otherwise, a float16 GpuIndexFlat retains only float16 data.
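The decision this diff makes can be sketched as a single predicate. This is a hedged illustration in Python, not Faiss code; the function name and parameters are hypothetical:

```python
# Hypothetical sketch of when the float32 shadow copy of centroids is
# needed. Per the diff: only a float16 GpuIndexFlat that actually serves
# as an IVF coarse quantizer requires it; a standalone float16 flat
# index keeps float16 data only, and float32 data needs no shadow.
def needs_float32_shadow(is_float16: bool, used_as_ivf_quantizer: bool) -> bool:
    # float32 centroids are only required for IVF residual computation
    return is_float16 and used_as_ivf_quantizer
```

Previously the shadow copy was made whenever the index was float16, which is why standalone float16 flat indexes saw the memory bloat.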

This should prevent memory bloat with massive float16 flat indexes.

Ideally the shadow float32 values for GPU coarse indices shouldn't be needed at all, but removing them will require updating the IVFPQ code to allow float16 IVF centroids. This is something I will pursue in a less time-limited diff.

This diff also changes the GpuIndexFlat reconstruct methods to use kernels explicitly designed to operate on float16 and float32 data as needed, rather than requiring access to the entire matrix of float32 values.
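The idea behind the new reconstruct path can be illustrated with NumPy. This is a conceptual sketch, not the Faiss API: operate on the stored precision directly and widen to float32 in a single fused step, instead of first materializing a full float32 copy of the index.

```python
import numpy as np

# Float16 flat storage, as a GpuIndexFlat with useFloat16 would hold it.
stored = np.random.rand(10, 4).astype(np.float16)

def reconstruct_n(store, i0, n):
    """Illustrative: copy a contiguous range of vectors and convert to
    float32 in one pass (no intermediate full-precision shadow matrix)."""
    return store[i0:i0 + n].astype(np.float32)

out = reconstruct_n(stored, 2, 3)  # vectors 2..4 as float32
```

The fused copy+convert is what the new kernels do on the GPU; with float32 storage the conversion is a no-op and the kernel degenerates to a copy.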

Also added some assertions to help track down issues.

An additional problem, seen in N2630278 post-D37777979, is that calling reconstruct on a large flat index (one with more than 2^31 scalar elements) caused an int32 overflow in the reconstruct kernel used for a single vector or a contiguous range of vectors. Previously this case was handled by `cudaMemcpyAsync` with `size_t` arithmetic, but in order to handle float16 and float32 uniformly there is now an explicit kernel that performs the copy and conversion together, avoiding a separate copy followed by a conversion. The fault seen in that notebook was in the reconstruct-by-range kernel.
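The overflow is simple to demonstrate with the arithmetic involved. A hypothetical sketch (illustrative names, emulating signed int32 wraparound in Python):

```python
DIM = 128                      # vector dimension
NUM_VECTORS = 25_000_000       # 25M vectors => 3.2e9 scalars > 2^31

def offset_int32(vec, component, dim):
    """Flat scalar offset vec*dim + component, computed as a signed
    int32 would compute it (wrapping at 2^32)."""
    off = (vec * dim + component) & 0xFFFFFFFF
    if off >= 2**31:
        off -= 2**32           # reinterpret as negative signed int32
    return off

# A late vector's offset wraps negative under int32 indexing,
# producing an out-of-bounds access in the kernel.
wrapped = offset_int32(NUM_VECTORS, 0, DIM)

# The same offset is perfectly representable with 64-bit indexing.
correct = NUM_VECTORS * DIM
```

With 64-bit (`Index::idx_t`) indexing the offset fits comfortably, which is the planned follow-up fix.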

This kernel has been temporarily fixed to avoid the int32 indexing problems. Since Faiss GPU was written in 2016, GPU memories have become much larger, and it now seems time to support (u)int64 indexing everywhere. I am adding this minimal change for now to fix the fault; early next week I will do a pass over the entire Faiss GPU code to use `Index::idx_t` as the indexing type everywhere, which should remove problems in dealing with large datasets.

Reviewed By: mdouze

Differential Revision: D40355184

fbshipit-source-id: 78f8b5d5aebcba610d3cd46f2cb2d26276e0ff15
2022-10-14 19:00:40 -07:00

Faiss

Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed primarily at Facebook AI Research.

News

See CHANGELOG.md for detailed information about latest features.

Introduction

Faiss contains several methods for similarity search. It assumes that the instances are represented as vectors and are identified by an integer, and that the vectors can be compared with L2 (Euclidean) distances or dot products. Vectors that are similar to a query vector are those that have the lowest L2 distance or the highest dot product with the query vector. It also supports cosine similarity, since this is a dot product on normalized vectors.
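The three comparison measures can be related with a few lines of NumPy; this is a generic illustration of the math, not Faiss code:

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.random(8).astype("float32")        # query vector
xs = rng.random((5, 8)).astype("float32")  # database vectors

# Squared L2 distance: similar vectors have the LOWEST value.
l2 = ((xs - q) ** 2).sum(axis=1)

# Inner (dot) product: similar vectors have the HIGHEST value.
ip = xs @ q

# Cosine similarity is just the inner product of normalized vectors.
qn = q / np.linalg.norm(q)
xn = xs / np.linalg.norm(xs, axis=1, keepdims=True)
cos = xn @ qn
```

This is why an inner-product index searched with normalized vectors performs cosine-similarity search.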

Some of the methods, like those based on binary vectors and compact quantization codes, solely use a compressed representation of the vectors and do not require keeping the original vectors. This generally comes at the cost of a less precise search, but these methods can scale to billions of vectors in main memory on a single server. Other methods, like HNSW and NSG, add an indexing structure on top of the raw vectors to make searching more efficient.

The GPU implementation can accept input from either CPU or GPU memory. On a server with GPUs, the GPU indexes can be used as drop-in replacements for the CPU indexes (e.g., replace IndexFlatL2 with GpuIndexFlatL2), and copies to/from GPU memory are handled automatically. Results will be faster, however, if both input and output remain resident on the GPU. Both single- and multi-GPU usage is supported.

Installing

Faiss comes with precompiled libraries for Anaconda in Python, see faiss-cpu and faiss-gpu. The library is mostly implemented in C++; the only hard dependency is a BLAS implementation. Optional GPU support is provided via CUDA, and the Python interface is also optional. It compiles with CMake. See INSTALL.md for details.

How Faiss works

Faiss is built around an index type that stores a set of vectors, and provides a function to search in them with L2 and/or dot product vector comparison. Some index types are simple baselines, such as exact search. Most of the available indexing structures correspond to various trade-offs with respect to

  • search time
  • search quality
  • memory used per index vector
  • training time
  • adding time
  • need for external data for unsupervised training

The optional GPU implementation provides what is likely (as of March 2017) the fastest exact and approximate (compressed-domain) nearest neighbor search implementation for high-dimensional vectors, fastest Lloyd's k-means, and fastest small k-selection algorithm known. The implementation is detailed here.
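For reference, the Lloyd's k-means mentioned above is the classic assign/update iteration. A minimal NumPy sketch of the algorithm (Faiss's GPU implementation is vastly faster, this only shows the structure):

```python
import numpy as np

def lloyd_kmeans(x, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: alternate nearest-centroid assignment
    (squared L2) with centroid recomputation as cluster means."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        # assignment step: nearest centroid by squared L2 distance
        d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        # update step: each centroid becomes the mean of its points
        for j in range(k):
            pts = x[assign == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return centroids, assign

x = np.random.default_rng(1).random((200, 4)).astype("float32")
centroids, assign = lloyd_kmeans(x, 5)
```

The assignment step is itself a nearest-neighbor search, which is why a fast k-selection primitive also accelerates k-means training.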

Full documentation of Faiss

The following are entry points for documentation:

Authors

The main authors of Faiss are:

Reference

Reference to cite when you use Faiss in a research paper:

@article{johnson2019billion,
  title={Billion-scale similarity search with {GPUs}},
  author={Johnson, Jeff and Douze, Matthijs and J{\'e}gou, Herv{\'e}},
  journal={IEEE Transactions on Big Data},
  volume={7},
  number={3},
  pages={535--547},
  year={2019},
  publisher={IEEE}
}

Join the Faiss community

For public discussion of Faiss or for questions, there is a Facebook group at https://www.facebook.com/groups/faissusers/

We monitor the issues page of the repository. You can report bugs, ask questions, etc.

License

Faiss is MIT-licensed.
