zh Wang 159641a06c Fix bug of HNSW (#2771)
Summary: Pull Request resolved: https://github.com/facebookresearch/faiss/pull/2771

Test Plan:
This is a definite bug; fixing it improves the recall rate.

Reproduce with the following code:
```cpp
// Minimal repro; MinimaxHeap is declared in faiss/impl/HNSW.h.
MinimaxHeap heap(1);       // heap with capacity for one element
heap.push(1, 1.0);

float v1 = 0;
heap.pop_min(&v1);         // remove the only element; heap is empty again

heap.push(1, 1.0);
assert(heap.nvalid == 1);  // fails before the fix: nvalid is miscounted
```

Baseline:
```
[aguzhva@devgpu005.ftw6 ~/fbsource/buck-out/v2/gen/fbcode/faiss/benchs (064c246e0)]$ taskset -c 72-95 ./bench_hnsw.par 16 hnsw
load data
Testing HNSW Flat
add
hnsw_add_vertices: adding 1000000 elements on top of 0 (preset_levels=0)
  max_level = 4
Adding 1 elements at level 4
Adding 26 elements at level 3
Adding 951 elements at level 2
Adding 30276 elements at level 1
Adding 968746 elements at level 0
Done in 14893.718 ms
search
efSearch 16 bounded queue True 	   0.007 ms per query, R@1 0.8823, missing rate 0.0000
efSearch 16 bounded queue False 	   0.008 ms per query, R@1 0.9120, missing rate 0.0000
efSearch 32 bounded queue True 	   0.012 ms per query, R@1 0.9565, missing rate 0.0000
efSearch 32 bounded queue False 	   0.011 ms per query, R@1 0.9641, missing rate 0.0000
efSearch 64 bounded queue True 	   0.018 ms per query, R@1 0.9889, missing rate 0.0000
efSearch 64 bounded queue False 	   0.019 ms per query, R@1 0.9896, missing rate 0.0000
efSearch 128 bounded queue True 	   0.036 ms per query, R@1 0.9970, missing rate 0.0000
efSearch 128 bounded queue False 	   0.037 ms per query, R@1 0.9970, missing rate 0.0000
efSearch 256 bounded queue True 	   0.062 ms per query, R@1 0.9991, missing rate 0.0000
efSearch 256 bounded queue False 	   0.067 ms per query, R@1 0.9991, missing rate 0.0000

[aguzhva@devgpu005.ftw6 ~/fbsource/buck-out/v2/gen/fbcode/faiss/benchs (fc6e9b938|remote/fbsource/stable...)]$ taskset -c 72-95 ./bench_hnsw.par 4 hnsw_sq
load data
Testing HNSW with a scalar quantizer
training
add
hnsw_add_vertices: adding 1000000 elements on top of 0 (preset_levels=0)
  max_level = 5
Adding 1 elements at level 5
Adding 15 elements at level 4
Adding 194 elements at level 3
Adding 3693 elements at level 2
Adding 58500 elements at level 1
Adding 937597 elements at level 0
Done in 8900.962 ms
search
efSearch 16 	   0.003 ms per query, R@1 0.7365, missing rate 0.0000
efSearch 32 	   0.006 ms per query, R@1 0.8712, missing rate 0.0000
efSearch 64 	   0.011 ms per query, R@1 0.9415, missing rate 0.0000
efSearch 128 	   0.018 ms per query, R@1 0.9778, missing rate 0.0000
efSearch 256 	   0.036 ms per query, R@1 0.9917, missing rate 0.0000
```

Candidate:
```
[aguzhva@devgpu005.ftw6 ~/fbsource/buck-out/v2/gen/fbcode/faiss/benchs (064c246e0)]$ taskset -c 72-95 ./bench_hnsw.par 16 hnsw
load data
Testing HNSW Flat
add
hnsw_add_vertices: adding 1000000 elements on top of 0 (preset_levels=0)
  max_level = 4
Adding 1 elements at level 4
Adding 26 elements at level 3
Adding 951 elements at level 2
Adding 30276 elements at level 1
Adding 968746 elements at level 0
Done in 14243.637 ms
search
efSearch 16 bounded queue True 	   0.006 ms per query, R@1 0.9122, missing rate 0.0000
efSearch 16 bounded queue False 	   0.006 ms per query, R@1 0.9122, missing rate 0.0000
efSearch 32 bounded queue True 	   0.011 ms per query, R@1 0.9643, missing rate 0.0000
efSearch 32 bounded queue False 	   0.011 ms per query, R@1 0.9644, missing rate 0.0000
efSearch 64 bounded queue True 	   0.018 ms per query, R@1 0.9880, missing rate 0.0000
efSearch 64 bounded queue False 	   0.020 ms per query, R@1 0.9880, missing rate 0.0000
efSearch 128 bounded queue True 	   0.036 ms per query, R@1 0.9969, missing rate 0.0000
efSearch 128 bounded queue False 	   0.035 ms per query, R@1 0.9969, missing rate 0.0000
efSearch 256 bounded queue True 	   0.064 ms per query, R@1 0.9994, missing rate 0.0000
efSearch 256 bounded queue False 	   0.062 ms per query, R@1 0.9994, missing rate 0.0000

[aguzhva@devgpu005.ftw6 ~/fbsource/buck-out/v2/gen/fbcode/faiss/benchs (6de3a2d76)]$ taskset -c 72-95 ./bench_hnsw.par 4 hnsw_sq
load data
Testing HNSW with a scalar quantizer
training
add
hnsw_add_vertices: adding 1000000 elements on top of 0 (preset_levels=0)
  max_level = 5
Adding 1 elements at level 5
Adding 15 elements at level 4
Adding 194 elements at level 3
Adding 3693 elements at level 2
Adding 58500 elements at level 1
Adding 937597 elements at level 0
Done in 8451.601 ms
search
efSearch 16 	   0.004 ms per query, R@1 0.8025, missing rate 0.0000
efSearch 32 	   0.006 ms per query, R@1 0.8925, missing rate 0.0000
efSearch 64 	   0.011 ms per query, R@1 0.9480, missing rate 0.0000
efSearch 128 	   0.019 ms per query, R@1 0.9793, missing rate 0.0000
efSearch 256 	   0.035 ms per query, R@1 0.9919, missing rate 0.0000

```

Reviewed By: mdouze

Differential Revision: D44815702

Pulled By: alexanderguzhva

fbshipit-source-id: ca7c7e83a6560316af543bde125ac703bf2e1dac
2023-04-11 08:32:06 -07:00

Faiss

Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed primarily at Meta's Fundamental AI Research group.

News

See CHANGELOG.md for detailed information about latest features.

Introduction

Faiss contains several methods for similarity search. It assumes that the instances are represented as vectors and are identified by an integer, and that the vectors can be compared with L2 (Euclidean) distances or dot products. Vectors that are similar to a query vector are those that have the lowest L2 distance or the highest dot product with the query vector. It also supports cosine similarity, since this is a dot product on normalized vectors.
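
The equivalence between cosine similarity and a dot product on normalized vectors can be checked in a few lines of plain Python (illustrative only, no Faiss involved):

```python
import math

def normalize(v):
    # scale a vector to unit L2 norm
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [3.0, 4.0], [4.0, 3.0]
# cosine similarity equals the dot product once both vectors are normalized
assert abs(cosine(a, b) - dot(normalize(a), normalize(b))) < 1e-12
```

This is why normalizing all database and query vectors up front lets a dot-product (inner-product) index serve cosine-similarity queries.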

Some of the methods, like those based on binary vectors and compact quantization codes, use only a compressed representation of the vectors and do not require keeping the original vectors. This generally comes at the cost of a less precise search, but these methods can scale to billions of vectors in main memory on a single server. Other methods, like HNSW and NSG, add an indexing structure on top of the raw vectors to make searching more efficient.

The GPU implementation can accept input from either CPU or GPU memory. On a server with GPUs, the GPU indexes can be used as drop-in replacements for the CPU indexes (e.g., replace IndexFlatL2 with GpuIndexFlatL2), and copies to and from GPU memory are handled automatically. Results will be faster, however, if both input and output remain resident on the GPU. Both single- and multi-GPU usage are supported.

Installing

Faiss comes with precompiled libraries for Anaconda in Python; see faiss-cpu and faiss-gpu. The library is mostly implemented in C++; the only hard dependency is a BLAS implementation. Optional GPU support is provided via CUDA, and the Python interface is also optional. It compiles with CMake. See INSTALL.md for details.
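
As a quick orientation (INSTALL.md is the authoritative source, and the exact channel and package versions may differ), the conda packages are typically installed as:

```shell
# CPU-only build
conda install -c pytorch faiss-cpu
# or, with CUDA support
conda install -c pytorch faiss-gpu
```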

How Faiss works

Faiss is built around an index type that stores a set of vectors, and provides a function to search in them with L2 and/or dot product vector comparison. Some index types are simple baselines, such as exact search. Most of the available indexing structures correspond to various trade-offs with respect to

  • search time
  • search quality
  • memory used per index vector
  • training time
  • adding time
  • need for external data for unsupervised training
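
As a point of reference for these trade-offs, the exact-search baseline can be sketched as a brute-force scan. This mirrors what a flat L2 index computes (slow but exact, with no training and no compression); it is plain Python for illustration, not Faiss code:

```python
import heapq

def exact_search(xb, xq, k):
    """Brute-force k-nearest-neighbor search under squared L2 distance."""
    results = []
    for q in xq:
        # compute the distance from the query to every database vector
        dists = [(sum((a - b) ** 2 for a, b in zip(q, v)), i)
                 for i, v in enumerate(xb)]
        # keep the k smallest (distance, index) pairs
        results.append(heapq.nsmallest(k, dists))
    return results

xb = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
hits = exact_search(xb, [[0.9, 0.1]], k=2)
# nearest database vector to the query is index 1, then index 0
```

Every other index type trades some of this exactness for lower search time or memory.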

The optional GPU implementation provides what is likely (as of March 2017) the fastest exact and approximate (compressed-domain) nearest neighbor search implementation for high-dimensional vectors, fastest Lloyd's k-means, and fastest small k-selection algorithm known. The implementation is detailed here.
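
Lloyd's k-means, mentioned above, alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A minimal 1-D sketch of the algorithm follows; this is only an illustration of the iteration, not Faiss's heavily optimized, GPU-accelerated implementation:

```python
def kmeans_1d(xs, centroids, iters=10):
    """Run Lloyd iterations on 1-D points from the given initial centroids."""
    for _ in range(iters):
        # assignment step: each point goes to its nearest centroid
        clusters = [[] for _ in centroids]
        for x in xs:
            j = min(range(len(centroids)), key=lambda c: (x - centroids[c]) ** 2)
            clusters[j].append(x)
        # update step: each centroid moves to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

cents = kmeans_1d([0.0, 0.5, 9.5, 10.0], [0.1, 9.0])
# the two centroids converge to the means of the two point groups
```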

Full documentation of Faiss

The following are entry points for documentation:

Authors

The main authors of Faiss are:

  • Hervé Jégou initiated the Faiss project and wrote its first implementation
  • Matthijs Douze implemented most of the CPU Faiss
  • Jeff Johnson implemented all of the GPU Faiss
  • Lucas Hosseini implemented the binary indexes and the build system
  • Chengqi Deng implemented NSG, NNdescent and much of the additive quantization code.
  • Alexandr Guzhva contributed many optimizations: SIMD, memory allocation and layout, fast decoding kernels for vector codecs, etc.

Reference

Reference to cite when you use Faiss in a research paper:

@article{johnson2019billion,
  title={Billion-scale similarity search with {GPUs}},
  author={Johnson, Jeff and Douze, Matthijs and J{\'e}gou, Herv{\'e}},
  journal={IEEE Transactions on Big Data},
  volume={7},
  number={3},
  pages={535--547},
  year={2019},
  publisher={IEEE}
}

Join the Faiss community

For public discussion of Faiss or for questions, there is a Facebook group at https://www.facebook.com/groups/faissusers/

We monitor the issues page of the repository. You can report bugs, ask questions, etc.

Faiss is MIT-licensed, refer to the LICENSE file in the top level directory.

Copyright © Meta Platforms, Inc. See the Terms of Use and Privacy Policy for this project.
