Faiss

Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed by Facebook AI Research.

NEWS

NEW: version 1.6.3 (2020-03-27) IndexBinaryHash, GPU support for alternative distances.

NEW: version 1.6.1 (2019-11-29) bugfix.

NEW: version 1.6.0 (2019-10-15) code structure reorg, support for codec interface.

NEW: version 1.5.3 (2019-06-24) fix performance regression in IndexIVF.

NEW: version 1.5.2 (2019-05-27) the license was relaxed to MIT from BSD+Patents. Read LICENSE for details.

NEW: version 1.5.0 (2018-12-19) GPU binary flat index and binary HNSW index.

NEW: version 1.4.0 (2018-08-30) no more crashes in pure Python code.

NEW: version 1.3.0 (2018-07-12) support for binary indexes.

NEW: latest commit (2018-02-22) supports on-disk storage of inverted indexes, see demos/demo_ondisk_ivf.py

NEW: latest commit (2018-01-09) includes an implementation of the HNSW indexing method, see benchs/bench_hnsw.py

NEW: there is now a Facebook public discussion group for Faiss users at https://www.facebook.com/groups/faissusers/

NEW: on 2017-07-30, the license on Faiss was relaxed to BSD from CC-BY-NC. Read LICENSE for details.

Introduction

Faiss contains several methods for similarity search. It assumes that the instances are represented as vectors, each identified by an integer, and that the vectors can be compared with L2 (Euclidean) distances or dot products. Vectors that are similar to a query vector are those with the lowest L2 distance or the highest dot product with the query vector. Cosine similarity is also supported, since it is a dot product on normalized vectors.
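The three comparison modes above can be sketched in plain NumPy (this is an illustration of what the library computes, not Faiss API code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                                  # vector dimension
xb = rng.standard_normal((100, d)).astype("float32")   # database vectors
xq = rng.standard_normal(d).astype("float32")          # one query vector

# L2 (Euclidean) search: the best match minimizes the squared distance.
l2 = ((xb - xq) ** 2).sum(axis=1)
best_l2 = int(l2.argmin())

# Inner-product search: the best match maximizes the dot product.
ip = xb @ xq
best_ip = int(ip.argmax())

# Cosine similarity is just the inner product on unit-normalized vectors.
xb_n = xb / np.linalg.norm(xb, axis=1, keepdims=True)
xq_n = xq / np.linalg.norm(xq)
best_cos = int((xb_n @ xq_n).argmax())
```

Note that the L2-nearest and the inner-product-nearest vector are in general different; they coincide only when all vectors have the same norm.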

Most of the methods, such as those based on binary vectors and compact quantization codes, use only a compressed representation of the vectors and do not require keeping the original vectors. This generally comes at the cost of a less precise search, but these methods can scale to billions of vectors in main memory on a single server.
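To give a feel for compact quantization codes, here is a toy product-quantization-style encoder in NumPy. The "training" (random centroids) and all names are illustrative, not the Faiss implementation; the point is that each vector is stored as a few small codes instead of its original floats:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, ksub = 8, 2, 16       # dimension, sub-quantizers, centroids per sub-space
dsub = d // m               # each sub-quantizer sees a slice of dsub dimensions

x = rng.standard_normal((500, d)).astype("float32")

# Toy "training": use random database sub-vectors as the per-sub-space codebooks.
codebooks = np.stack([
    x[rng.choice(len(x), ksub, replace=False), i * dsub:(i + 1) * dsub]
    for i in range(m)
])  # shape (m, ksub, dsub)

def encode(v):
    """Each vector becomes m small codes (4 bits each here, since ksub = 16)."""
    return np.array([
        ((codebooks[i] - v[i * dsub:(i + 1) * dsub]) ** 2).sum(axis=1).argmin()
        for i in range(m)
    ], dtype=np.uint8)

def decode(codes):
    """Concatenate the chosen centroids: an approximation of the original vector."""
    return np.concatenate([codebooks[i][codes[i]] for i in range(m)])

codes = encode(x[0])        # 8 float32 values (32 bytes) -> 2 codes
approx = decode(codes)      # lossy reconstruction used for distance estimation
```

Search then compares queries against these compressed codes, which is why precision drops but memory per vector shrinks dramatically.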

The GPU implementation can accept input from either CPU or GPU memory. On a server with GPUs, the GPU indexes can be used as drop-in replacements for the CPU indexes (e.g., replace IndexFlatL2 with GpuIndexFlatL2), and copies to and from GPU memory are handled automatically. However, results will be faster if both input and output remain resident on the GPU. Both single-GPU and multi-GPU setups are supported.

Building

The library is mostly implemented in C++, with optional GPU support provided via CUDA, and an optional Python interface. The CPU version requires a BLAS library. It compiles with a Makefile and can be packaged in a docker image. See INSTALL.md for details.

How Faiss works

Faiss is built around an index type that stores a set of vectors, and provides a function to search in them with L2 and/or dot product vector comparison. Some index types are simple baselines, such as exact search. Most of the available indexing structures correspond to various trade-offs with respect to

  • search time
  • search quality
  • memory used per index vector
  • training time
  • need for external data for unsupervised training
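One way to see the time/quality trade-off is an inverted-file (IVF) search sketch: vectors are assigned to coarse cells, and a query visits only a few cells instead of the whole database. This NumPy toy (not Faiss code; centroid "training" is just random sampling) mirrors the idea behind Faiss's IVF indexes:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, nlist, nprobe = 8, 1000, 10, 2   # dim, db size, cells, cells probed

xb = rng.standard_normal((n, d)).astype("float32")
xq = rng.standard_normal(d).astype("float32")

# Toy "training": pick nlist random database vectors as coarse centroids.
centroids = xb[rng.choice(n, nlist, replace=False)]

# Add: assign each vector to the inverted list of its nearest centroid.
assign = ((xb[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
lists = [np.flatnonzero(assign == c) for c in range(nlist)]

# Search: rank the cells by distance to the query, scan only nprobe of them.
cell_d = ((centroids - xq) ** 2).sum(axis=1)
probe = np.argsort(cell_d)[:nprobe]
cand = np.concatenate([lists[c] for c in probe])
best = int(cand[((xb[cand] - xq) ** 2).sum(axis=1).argmin()])
```

Raising nprobe scans more cells: slower but closer to exact search. That is exactly the kind of knob the trade-off list above refers to.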

The optional GPU implementation provides what is likely (as of March 2017) the fastest exact and approximate (compressed-domain) nearest neighbor search implementation for high-dimensional vectors, the fastest Lloyd's k-means, and the fastest small-k selection algorithm known. The implementation is detailed here.
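Small-k selection means extracting the k best results from a long array of distances. The GPU implementation uses specialized in-register techniques, but on CPU the idea can be sketched with NumPy's partial selection, which avoids a full sort:

```python
import numpy as np

rng = np.random.default_rng(3)
distances = rng.standard_normal(10000).astype("float32")
k = 5

# argpartition places the k smallest values (unordered) in the first k slots:
# roughly O(n) work instead of the O(n log n) of a full sort.
idx = np.argpartition(distances, k)[:k]

# Then order just those k candidates, an O(k log k) step.
idx = idx[np.argsort(distances[idx])]
topk = distances[idx]      # the k smallest distances, in increasing order
```

For nearest-neighbor search, k is typically tiny compared to the number of candidates, so partial selection dominates a full sort by a wide margin.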

Full documentation of Faiss

The following are entry points for documentation:

Authors

The main authors of Faiss are:

Reference

Reference to cite when you use Faiss in a research paper:

@article{JDH17,
  title={Billion-scale similarity search with GPUs},
  author={Johnson, Jeff and Douze, Matthijs and J{\'e}gou, Herv{\'e}},
  journal={arXiv preprint arXiv:1702.08734},
  year={2017}
}

Join the Faiss community

For public discussion of Faiss or for questions, there is a Facebook group at https://www.facebook.com/groups/faissusers/

We monitor the issues page of the repository, where you can report bugs and ask questions.

License

Faiss is MIT-licensed.