Summary: In the GPU IVF code (Flat, SQ, and PQ), temporary memory is needed to store the unfiltered (or partially filtered) vector distances computed during list scanning, which are then k-selected by separate kernels. When a batch of queries is presented to an IVF index, the amount of temporary memory needed to store all these unfiltered distances prior to filtering can be very large, depending on IVF characteristics (such as the maximum number of vectors encoded in any of the IVF lists). In that case we cannot process the entire batch of queries at once, and must instead tile over the batch, reusing the temporary memory made available for these distances.

The old code duplicated this roughly equivalent logic in 3 different places (the IVFFlat/SQ code, IVFPQ with precomputed codes, and IVFPQ without precomputed codes). Furthermore, when little or no temporary memory was available, or when the available temporary memory was (vastly) exceeded by the amount needed for a particular query, the old code enforced a minimum of 8 queries processed at once. In certain cases (huge IVF list imbalance), this memory request could exceed the amount of memory that can be safely allocated on a GPU.

This diff consolidates the original 3 separate places where this calculation took place into 1 place in IVFUtils. The logic proceeds roughly as before, figuring out how many queries can be processed in the available temporary memory, except that we add a new heuristic for the case where the number of queries that can be processed concurrently falls below 8. This could be due either to little temporary memory being available, or to huge per-query memory requirements. In this case, we ignore the amount of temporary memory available and instead see how many queries' memory requirements would fit into a single 512 MiB allocation, so the request is capped at a reasonable amount. If a single query still cannot be satisfied by this allocation, we proceed executing 1 query at a time (which could still potentially exhaust GPU memory, but that error is unavoidable).

While a different heuristic based on the amount of memory actually allocatable on the device could be used instead of this fixed 512 MiB amount, there is, to my knowledge, no guarantee that a single cudaMalloc up to that limit would succeed (e.g., the GPU reports 3 GiB available and you attempt to allocate all of it in a single allocation), so we simply pick an amount that strikes a reasonable balance between efficiency (parallelism) and memory consumption. Note that if not enough temporary memory is available and a single 512 MiB allocation fails, then there is likely too little memory to proceed efficiently under any scenario, as Faiss does require some headroom for scratch space.

Reviewed By: mdouze

Differential Revision: D45574455

fbshipit-source-id: 08f5204e3e9656627c9134d7409b9b0960f07b2d
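As a rough illustration of the consolidated heuristic, the tile-size calculation might look like the sketch below. All names and the per-query cost model are hypothetical; the actual logic lives in the GPU IVF code (now consolidated in IVFUtils).

```python
# Hypothetical sketch of the query-tiling heuristic described above.
# Names and the per-query cost model are illustrative, not Faiss's actual code.

CAP_BYTES = 512 * 1024 * 1024  # fallback cap: a single 512 MiB allocation
MIN_CONCURRENT = 8             # threshold below which the fallback kicks in

def queries_per_tile(num_queries, bytes_per_query, temp_mem_bytes):
    """How many queries to process concurrently in one tile."""
    assert bytes_per_query > 0
    # How many queries fit in the available temporary memory?
    fit = temp_mem_bytes // bytes_per_query
    if fit >= MIN_CONCURRENT:
        return min(fit, num_queries)
    # Too few fit: either little temporary memory is available, or the
    # per-query requirement is huge. Ignore the temporary memory budget
    # and size the tile against a single capped allocation instead.
    fit = CAP_BYTES // bytes_per_query
    # Even if one query alone exceeds the cap, proceed 1 query at a time;
    # this may still exhaust GPU memory, but that error is unavoidable.
    return max(1, min(fit, num_queries))
```

The batch is then processed in ceil(num_queries / tile) passes, reusing the same scratch allocation for each tile.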
Faiss
Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed primarily at Meta's Fundamental AI Research group.
News
See CHANGELOG.md for detailed information about latest features.
Introduction
Faiss contains several methods for similarity search. It assumes that the instances are represented as vectors and are identified by an integer, and that the vectors can be compared with L2 (Euclidean) distances or dot products. Vectors that are similar to a query vector are those that have the lowest L2 distance or the highest dot product with the query vector. It also supports cosine similarity, since this is a dot product on normalized vectors.
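For instance, an exact L2 search with the Python wrappers looks like this (a minimal sketch; the array sizes are arbitrary):

```python
import numpy as np
import faiss

d = 64                                               # vector dimensionality
xb = np.random.random((1000, d)).astype('float32')   # database vectors
xq = np.random.random((5, d)).astype('float32')      # query vectors

index = faiss.IndexFlatL2(d)   # exact search with L2 distance
index.add(xb)                  # vectors get sequential integer ids
D, I = index.search(xq, 4)     # D: distances, I: ids of the 4 nearest neighbors
```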
Some of the methods, like those based on binary vectors and compact quantization codes, use only a compressed representation of the vectors and do not require keeping the original vectors. This generally comes at the cost of a less precise search, but these methods can scale to billions of vectors in main memory on a single server. Other methods, like HNSW and NSG, add an indexing structure on top of the raw vectors to make searching more efficient.
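As an illustration of an index that stores only compressed codes, an IVFPQ index is trained and searched as follows (a minimal sketch; the parameter values are arbitrary):

```python
import numpy as np
import faiss

d = 64
xb = np.random.random((10000, d)).astype('float32')

quantizer = faiss.IndexFlatL2(d)                    # coarse quantizer for the inverted lists
index = faiss.IndexIVFPQ(quantizer, d, 100, 8, 8)   # 100 lists, 8-byte codes (8 bits per sub-quantizer)
index.train(xb)                                     # the quantizers need training data
index.add(xb)                                       # only compressed codes are stored
index.nprobe = 10                                   # number of lists visited at search time
D, I = index.search(xb[:5], 4)
```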
The GPU implementation can accept input from either CPU or GPU memory. On a server with GPUs, the GPU indexes can be used as a drop-in replacement for the CPU indexes (e.g., replace `IndexFlatL2` with `GpuIndexFlatL2`) and copies to/from GPU memory are handled automatically. However, results will be faster if both input and output remain resident on the GPU. Both single- and multi-GPU usage are supported.
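As a minimal sketch (assuming a faiss-gpu installation and at least one visible GPU), the standard conversion helper moves a CPU index to the GPU:

```python
import numpy as np
import faiss

d = 64
xb = np.random.random((1000, d)).astype('float32')

cpu_index = faiss.IndexFlatL2(d)
res = faiss.StandardGpuResources()                      # GPU scratch/stream resources
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)   # clone onto GPU 0
gpu_index.add(xb)                                       # same API as the CPU index
D, I = gpu_index.search(xb[:5], 4)
```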
Installing
Faiss comes with precompiled libraries for Anaconda in Python, see faiss-cpu and faiss-gpu. The library is mostly implemented in C++; the only dependency is a BLAS implementation. Optional GPU support is provided via CUDA, and the Python interface is also optional. It compiles with CMake. See INSTALL.md for details.
How Faiss works
Faiss is built around an index type that stores a set of vectors, and provides a function to search in them with L2 and/or dot product vector comparison. Some index types are simple baselines, such as exact search. Most of the available indexing structures correspond to various trade-offs with respect to
- search time
- search quality
- memory used per index vector
- training time
- adding time
- need for external data for unsupervised training
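Many of these trade-offs can be explored through the index factory, which builds an index from a string description. A minimal sketch; the factory strings below are illustrative examples, not recommendations:

```python
import faiss

d = 64

# Exact search: best quality, most memory per vector, no training needed.
flat = faiss.index_factory(d, "Flat")

# IVF + PQ: much smaller memory footprint per vector, requires training,
# trades some search quality for speed and memory.
ivfpq = faiss.index_factory(d, "IVF100,PQ8")

# HNSW: graph structure on top of the raw vectors, fast search,
# more memory per vector, no training needed.
hnsw = faiss.index_factory(d, "HNSW32")
```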
The optional GPU implementation provides what is likely (as of March 2017) the fastest exact and approximate (compressed-domain) nearest neighbor search implementation for high-dimensional vectors, fastest Lloyd's k-means, and fastest small k-selection algorithm known. The implementation is detailed here.
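For instance, the k-means implementation is exposed directly through the Python wrappers (a minimal sketch; the parameter values are arbitrary):

```python
import numpy as np
import faiss

d = 64
x = np.random.random((10000, d)).astype('float32')

kmeans = faiss.Kmeans(d, 256, niter=20, verbose=False)  # 256 centroids
kmeans.train(x)
D, I = kmeans.index.search(x[:5], 1)   # assign vectors to their nearest centroid
centroids = kmeans.centroids           # (256, d) array of centroids
```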
Full documentation of Faiss
The following are entry points for documentation:
- the full documentation can be found on the wiki page, including a tutorial, a FAQ and a troubleshooting section
- the doxygen documentation gives per-class information extracted from code comments
- to reproduce results from our research papers, Polysemous codes and Billion-scale similarity search with GPUs, refer to the benchmarks README. For Link and code: Fast indexing with graphs and compact regression codes, see the link_and_code README
Authors
The main authors of Faiss are:
- Hervé Jégou initiated the Faiss project and wrote its first implementation
- Matthijs Douze implemented most of the CPU Faiss
- Jeff Johnson implemented all of the GPU Faiss
- Lucas Hosseini implemented the binary indexes and the build system
- Chengqi Deng implemented NSG, NNdescent and much of the additive quantization code.
- Alexandr Guzhva contributed many optimizations: SIMD, memory allocation and layout, fast decoding kernels for vector codecs, etc.
Reference
Reference to cite when you use Faiss in a research paper:
```bibtex
@article{johnson2019billion,
  title={Billion-scale similarity search with {GPUs}},
  author={Johnson, Jeff and Douze, Matthijs and J{\'e}gou, Herv{\'e}},
  journal={IEEE Transactions on Big Data},
  volume={7},
  number={3},
  pages={535--547},
  year={2019},
  publisher={IEEE}
}
```
Join the Faiss community
For public discussion of Faiss or for questions, there is a Facebook group at https://www.facebook.com/groups/faissusers/
We monitor the issues page of the repository. You can report bugs, ask questions, etc.
Legal
Faiss is MIT-licensed, refer to the LICENSE file in the top level directory.
Copyright © Meta Platforms, Inc. See the Terms of Use and Privacy Policy for this project.