Summary: When running in a heavily parallelized environment, the test becomes very slow and causes timeouts. Here we reduce the number of threads.
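A minimal sketch of this kind of cap, assuming the test limits Faiss's OpenMP thread count (the actual mechanism used by the test may differ):
```python
import faiss

# Cap OpenMP parallelism so the test does not oversubscribe a machine
# that is already running many jobs in parallel.
faiss.omp_set_num_threads(4)
```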
Reviewed By: wickedfoo, Ben0mega
Differential Revision: D25921771
fbshipit-source-id: 1e0aacbb3e4f6e8f33ec893984b343eb5a610424
Summary:
This avoids triggering the following warnings:
```
tests/test_ondisk_ivf.cpp:36:24: warning: 'tempnam' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of tempnam(3), it is highly recommended that you use mkstemp(3) instead. [-Wdeprecated-declarations]
char *cfname = tempnam (nullptr, prefix);
^
tests/test_merge.cpp:34:24: warning: 'tempnam' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of tempnam(3), it is highly recommended that you use mkstemp(3) instead. [-Wdeprecated-declarations]
char *cfname = tempnam (nullptr, prefix);
```
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1596
Reviewed By: wickedfoo
Differential Revision: D25710654
Pulled By: beauby
fbshipit-source-id: 2aa027c3b32f6cf7f41eb55360424ada6d200901
Summary:
Added a few functions in contrib to:
- run range searches by batches on the query or the database side
- emulate range search on GPU: search on GPU with k=1024 and, if the farthest neighbor is still within the radius, re-run the search on CPU (see the sketch below)
- serve as reference implementations for precision-recall evaluation on range search datasets
- compute precision-recall plots efficiently (i.e. sweep over thresholds)
The new functions are mainly in a new `evaluation.py`.
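A minimal sketch of the GPU emulation idea, assuming an L2 metric and a CPU fallback index; the function name and arguments are hypothetical, and the actual contrib implementation may differ:
```python
import numpy as np

def range_search_gpu_emulated(index_gpu, index_cpu, xq, radius, k=1024):
    D, I = index_gpu.search(xq, k)  # kNN search on GPU with a large k
    lims = np.zeros(len(xq) + 1, dtype='int64')
    all_D, all_I = [], []
    for q in range(len(xq)):
        if D[q, -1] < radius:
            # the k-th neighbor is still within the radius, so the GPU
            # result may be truncated: redo this query exactly on CPU
            _, Dq, Iq = index_cpu.range_search(xq[q:q + 1], radius)
        else:
            mask = D[q] < radius
            Dq, Iq = D[q][mask], I[q][mask]
        all_D.append(Dq)
        all_I.append(Iq)
        lims[q + 1] = lims[q] + len(Dq)
    return lims, np.concatenate(all_D), np.concatenate(all_I)
```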
Reviewed By: wickedfoo
Differential Revision: D25627619
fbshipit-source-id: 58f90654c32c925557d7bbf8083efbb710712e03
Summary:
IndexPQ and IndexIVFPQ implementations with AVX shuffle instructions.
Training and code computation do not change with respect to the original PQ versions, but the code layout is "packed" so that it can be used efficiently by the SIMD computation kernels.
The main changes are:
- new IndexPQFastScan and IndexIVFPQFastScan objects (see the usage sketch below)
- simdlib.h for an abstraction above the AVX2 intrinsics
- BlockInvertedLists for invlists that are 32-byte aligned and where codes are not sequential
- pq4_fast_scan.h/.cpp: for packing codes and look-up tables + optimized distance computation kernels
- simd_result_handlers.h: SIMD version of result collection in heaps / reservoirs
Misc changes:
- added contrib.inspect_tools to access fields in C++ objects
- moved .h and .cpp code for inverted lists to an invlists/ subdirectory, and made a .h/.cpp for InvertedListsIOHook
- added a new inverted lists type with 32-byte aligned codes (for consumption by SIMD)
- moved Windows-specific intrinsics to platform_macros.h
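A minimal usage sketch from Python, assuming the constructor is exposed as `IndexPQFastScan(d, M, nbits)` and using random data for illustration:
```python
import numpy as np
import faiss

d = 64
xb = np.random.rand(10000, d).astype('float32')
xq = np.random.rand(100, d).astype('float32')

# 8 sub-quantizers with 4-bit codes, packed for SIMD scanning
index = faiss.IndexPQFastScan(d, 8, 4)
index.train(xb)
index.add(xb)
D, I = index.search(xq, 10)
```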
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1542
Test Plan:
```
buck test mode/opt -j 4 //faiss/tests/:test_fast_scan_ivf //faiss/tests/:test_fast_scan
buck test mode/opt //faiss/manifold/...
```
Reviewed By: wickedfoo
Differential Revision: D25175439
Pulled By: mdouze
fbshipit-source-id: ad1a40c0df8c10f4b364bdec7172e43d71b56c34
Summary:
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1531
vector_to_array assumes that `long` is 64 bits. Fix this and add a test.
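For illustration, a minimal check of the intended behavior; the `Int64Vector` wrapper name is an assumption about what the SWIG layer exposes:
```python
import numpy as np
import faiss

v = faiss.Int64Vector()
for x in [1, 2, 3]:
    v.push_back(x)

# must come back as 64-bit ints even where `long` is 32 bits
a = faiss.vector_to_array(v)
assert a.dtype == np.int64
```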
Reviewed By: wickedfoo
Differential Revision: D25022363
fbshipit-source-id: f51f723d590d71ee5ef39e3f86ef69426df833fa
Summary:
The tests TestPQTables are very slow in dev mode with BLAS. This seems to be due to the training operation of the PQ. However, since it does not matter whether the training is accurate, we can just reduce the number of training iterations from the default 25 to 4 (sketched below).
It is still unclear why this happens, because the runtime is spent in BLAS, which should be independent of mode/opt or mode/dev.
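A minimal sketch of the change, assuming the tests build the quantizer directly (names and sizes are illustrative):
```python
import numpy as np
import faiss

d, M, nbits = 32, 4, 8
xt = np.random.rand(1000, d).astype('float32')

pq = faiss.ProductQuantizer(d, M, nbits)
pq.cp.niter = 4  # default is 25; training accuracy does not matter here
pq.train(xt)
```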
Reviewed By: wickedfoo
Differential Revision: D24783752
fbshipit-source-id: 38077709eb9a6432210c11c3040765e139353ae8
Summary:
This diff streamlines the code that collects results for brute force distance computations for the L2 / IP and range search / knn search combinations.
It introduces a `ResultHandler` template class that abstracts what happens with the computed distances and ids. In addition to the heap result handler and the range search result handler, it introduces a reservoir result handler that improves the search speed for large k (>=100).
Benchmark results (https://fb.quip.com/y0g1ACLEqJXx#OCaACA2Gm45) show that on small datasets (10k) search is 10-50% faster (improvements are larger for small k). There is room for improvement in the reservoir implementation, which is currently quite naive, but the diff is already useful in its current form.
Experiments on precomputed database vector norms for L2 distance computations were not conclusive performance-wise, so that implementation is removed from IndexFlatL2.
This diff also removes IndexL2BaseShift, which was never used.
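The actual result handlers are C++ templates; the reservoir idea can be illustrated in Python as a rough sketch (not the committed code): candidates accumulate in a buffer that is shrunk with a partial sort only when it overflows, which avoids the per-insertion cost of a binary heap when k is large.
```python
import numpy as np

def reservoir_topk(candidates, k):
    # candidates: iterable of (id, distance) pairs; returns the k smallest.
    capacity = 2 * k
    buf_d, buf_i = [], []
    threshold = np.inf  # pruning bound, tightened at each shrink
    for i, d in candidates:
        if d < threshold:
            buf_d.append(d)
            buf_i.append(i)
            if len(buf_d) >= capacity:
                # shrink: keep the k best seen so far, tighten the bound
                keep = np.argpartition(buf_d, k)[:k]
                buf_d = list(np.asarray(buf_d)[keep])
                buf_i = list(np.asarray(buf_i)[keep])
                threshold = max(buf_d)
    order = np.argsort(buf_d)[:k]
    return np.asarray(buf_d)[order], np.asarray(buf_i)[order]
```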
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1502
Test Plan:
```
buck test //faiss/tests/:test_product_quantizer
buck test //faiss/tests/:test_index -- TestIndexFlat
```
Reviewed By: wickedfoo
Differential Revision: D24705464
Pulled By: mdouze
fbshipit-source-id: 270e10b19f3c89ed7b607ec30549aca0ac5027fe
Summary: When an INNER_PRODUCT index is used for clustering, a higher objective is better, so when redoing clusterings the highest objective should be retained (not the lowest). This diff fixes this and adds a test.
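A minimal sketch of the setup this fixes, with random data for illustration (for inner product the clustering objective is a sum of dot products, so the best redo is the one with the highest value):
```python
import numpy as np
import faiss

d, k = 32, 10
x = np.random.rand(1000, d).astype('float32')

clus = faiss.Clustering(d, k)
clus.nredo = 5  # with this fix, the redo with the highest objective is kept
index = faiss.IndexFlatIP(d)
clus.train(x, index)
```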
Reviewed By: wickedfoo
Differential Revision: D24701894
fbshipit-source-id: b9ec224cf8f4ffdfd2b8540ce37da43386a27b7a
Summary:
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1484
This diff allows for native usage of PyTorch tensors for Faiss indexes on both CPU and GPU. It is currently only implemented in this diff for things that inherit from `faiss.Index`, which covers the non-binary indices, and it patches the same functions on `faiss.Index` that were also covered by `__init__.py` for numpy interoperability.
There must be uniformity among the inputs: if any array input is a Torch tensor, then all array inputs must be Torch tensors. Similarly, if any array input is a numpy ndarray, then all array inputs must be numpy ndarrays.
If `faiss.contrib.torch_utils` is imported, it ensures that `import faiss` has already been performed to patch all of the functions using the base `__init__.py` numpy wrappers, and then patches the following functions again:
```
add
add_with_ids
assign
train
search
remove_ids
reconstruct
reconstruct_n
range_search
update_vectors
search_and_reconstruct
sa_encode
sa_decode
```
to allow usage of PyTorch CPU tensors, and additionally PyTorch GPU tensors if the index being used is on the GPU.
numpy functionality is still available when `faiss.contrib.torch_utils` is imported; we pass through to the original patched numpy function when we detect numpy inputs.
In addition, to allow for better (asynchronous) GPU usage without requiring the CPU to be involved, all of these functions which construct tensors/arrays for output now take optional pre-allocated storage arguments (numpy ndarray or torch.Tensor) that will contain the output data. `range_search` is the only exception, as the size of its output is indeterminate; the eventual GPU implementation will likely require the user to provide a maximum cap on the output size, and allow that to be passed instead. If the optional pre-allocated outputs are provided by the user, they are used; otherwise, new ndarrays / Tensors are constructed as before and returned. If this feature were not provided on the GPU, every execution would be completely serial, as we would depend upon the CPU to allocate GPU memory before every operation. Instead, this can now function much like NN graph execution on the GPU: assuming all of the data requirements are pre-allocated, execution runs at the full speed of the GPU and is not stalled by sequential kernel launches.
This diff also exposes the `GpuResources` shared_ptr object owned by a GPU index. This is required for pytorch GPU so that we can perform proper stream ordering in Faiss with respect to the current pytorch stream. So, Faiss indices now perform more or less as any NN operation in Torch does.
Note, however, that a Faiss index has its own setting on current device, and if the pytorch GPU tensor inputs are resident on a different device than what the Faiss index expects, a cross-device copy will be initiated. I may choose to make this an error in the future and require matching device to device.
This diff also found a bug when passing GPU data directly to `train()` for `GpuIndexIVFFlat` and `GpuIndexIVFScalarQuantizer`, as I guess we never tested passing GPU data directly to these functions before. `GpuIndexIVFPQ` was doing the right thing however.
The assign function is now also implemented on the GPU as well, and is now marked `const` to be in line with the `search` function.
Also added better checking of non-contiguous inputs for both Torch tensors and numpy ndarrays.
Updated the `knn_gpu` function with a base implementation always present that allows for usage of numpy arrays, which is overridden when `torch_utils` is imported to allow torch usage. This supports row/column major layout, float32/float16 data and int64/int32 indices for both numpy and torch.
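A minimal usage sketch (CPU index with CPU torch tensors); the `D=`/`I=` keyword names for the pre-allocated outputs are an assumption based on the description above:
```python
import torch
import faiss
import faiss.contrib.torch_utils  # patches faiss.Index methods for torch tensors

d = 64
index = faiss.IndexFlatL2(d)
xb = torch.rand(1000, d)  # float32 CPU tensor
index.add(xb)

xq = torch.rand(10, d)
D, I = index.search(xq, 5)  # D and I come back as torch tensors

# optionally provide pre-allocated outputs to avoid allocating in the call
D_out = torch.empty(10, 5, dtype=torch.float32)
I_out = torch.empty(10, 5, dtype=torch.int64)
index.search(xq, 5, D=D_out, I=I_out)
```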
Reviewed By: mdouze
Differential Revision: D24299400
fbshipit-source-id: b4f117b9c120bd1ad83e7702087051ab7b303b29
Summary: The synthetic dataset can now have inner product (IP) ground truth
Reviewed By: wickedfoo
Differential Revision: D24219860
fbshipit-source-id: 42e094479311135e932821ac0a97ed0fb237bf78
Summary:
This diff adds a CombinedIndexSharded1T class to combined_index that uses the 30 shards from the Spark reducer.
The metadata is stored in pickle files on manifold.
Differential Revision: D24018824
fbshipit-source-id: be4ff8b38c3d6e1bb907e02b655d0e419b7a6fea
Summary:
Removed an unused function that caused compile errors in some configurations.
Added a contrib function (exhaustive_search.knn) to compute the k nearest neighbors without constructing an index (usage sketch below).
Renamed the equivalent GPU function to exhaustive_search.knn_gpu (it does not make much sense to mention numpy in the name since all functions take numpy arguments by default).
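A minimal usage sketch with random data:
```python
import numpy as np
from faiss.contrib import exhaustive_search

xb = np.random.rand(10000, 32).astype('float32')
xq = np.random.rand(100, 32).astype('float32')

# brute-force k nearest neighbors, no index construction
D, I = exhaustive_search.knn(xq, xb, 10)
```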
Reviewed By: beauby
Differential Revision: D24215427
fbshipit-source-id: 6d8e1eafa7c57593304b7b76f83b3015e4d2a2bb
Summary:
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1432
The contrib function knn_ground_truth does not provide exactly the same results on GPU and CPU (but the relative accuracy is still 1e-7). This diff relaxes the tolerance on CPU and adds a test on GPU.
Reviewed By: wickedfoo
Differential Revision: D24012199
fbshipit-source-id: aaa20dbdf42b876b3ed7da34028646dbb20833d3
Summary:
This diff fixes https://github.com/facebookresearch/faiss/issues/1412
There were various inconsistencies in how the shard and replica wrappers updated their internal state as the sub-indices were updated. This makes the two container classes work in the same way with similar synchronization functionality.
Reviewed By: beauby
Differential Revision: D23974186
fbshipit-source-id: c688c0c9124f823e4239aa2ff617b007b4564859
Summary:
This diff adds an object for a few useful datasets in faiss.contrib.
This includes synthetic datasets and the classic ones.
It is intended to work on:
- the FAIR cluster
- gluster
- manifold
Reviewed By: wickedfoo
Differential Revision: D23378763
fbshipit-source-id: 2437a7be9e712fd5ad1bccbe523cc1c936f7ab35
Summary:
`long` is 32 bits on Windows, and so is the default int type for numpy (e.g. the one used by `np.arange`).
This diff explicitly specifies 64-bit ints for all occurrences where it matters.
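An illustrative example of the kind of change, with hypothetical data:
```python
import numpy as np
import faiss

d = 32
xb = np.random.rand(100, d).astype('float32')
index = faiss.IndexIDMap(faiss.IndexFlatL2(d))

# np.arange would default to 32-bit ints on Windows; faiss ids are int64
ids = np.arange(100, dtype='int64')
index.add_with_ids(xb, ids)
```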
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1381
Reviewed By: wickedfoo
Differential Revision: D23371232
Pulled By: mdouze
fbshipit-source-id: 220262cd70ee70379f83de93561a4eae71c94b04
Bugfixes:
- slow scanning of inverted lists (#836).
Features:
- add basic support for 6 new metrics in CPU `IndexFlat` and `IndexHNSW` (#848);
- add support for `IndexIDMap`/`IndexIDMap2` with binary indexes (#780).
Misc:
- throw python exception for OOM (#758);
- make `DistanceComputer` available for all random access indexes;
- gradually moving from `long` to `int64_t` for portability.
Changelog:
- changed license: BSD+Patents -> MIT
- propagates exceptions raised in sub-indexes of IndexShards and IndexReplicas
- support for searching several inverted lists in parallel (parallel_mode != 0)
- better support for PQ codes where nbit != 8 or 16
- IVFSpectralHash implementation: spectral hash codes inside an IVF
- 6-bit per component scalar quantizer (4 and 8 bit were already supported)
- combinations of inverted lists: HStackInvertedLists and VStackInvertedLists
- configurable number of threads for OnDiskInvertedLists prefetching (including 0=no prefetch)
- more test and demo code compatible with Python 3 (print with parentheses)
- refactored benchmark code: data loading is now in a single file
+ Add conda packages metadata (now building Faiss using conda's toolchain);
+ add Dockerfile for building conda packages (for all CUDA versions);
+ add working Dockerfile building faiss on Centos7;
+ simplify GPU build;
+ avoid falling back to CPU-only version (python);
+ simplify TravisCI config;
+ update INSTALL.md;
+ add configure flag for specifying target architectures (--with-cuda-arch);
+ fix Makefile for gpu tests;
+ fix various Makefile issues;
+ remove stale file (gpu/utils/DeviceUtils.cpp).
Facebook sync (Mar 2019)
- MatrixStats object
- option to round coordinates during k-means optimization
- alternative option for search in HNSW
- moved stats and imbalance_factor of IndexIVF to InvertedLists object
- range search for IVFScalarQuantizer
- direct uint8 codec in ScalarQuantizer
- renamed IndexProxy to IndexReplicas and moved to main Faiss
- better support for PQ code assignment with external index
- support for IMI2x16 (4B virtual centroids!)
- support for k = 2048 search on GPU (instead of 1024)
- most CUDA mem alloc failures throw exceptions instead of terminating on an assertion
- support for renaming an OnDiskInvertedLists
- interrupt computations with ctrl-C in python