Summary:
This diff implements non-uniform quantization of vector norms in additive quantizers. Both index_factory and I/O are supported.
index_factory: `XXX_Ncqint{nbits}` where `nbits` is the number of bits used to quantize the vector norm.
With an 8-bit code it performs almost the same as 8-bit uniform quantization; with fewer than 8 bits it slightly improves accuracy:
```
RQ4x8_Nqint8: R@1 0.1116
RQ4x8_Ncqint8: R@1 0.1117
RQ4x8_Nqint4: R@1 0.0901
RQ4x8_Ncqint4: R@1 0.0989
```
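A minimal usage sketch of the new factory string (the dimension and the random data are for illustration only, not from the diff):
``` python
import faiss
import numpy as np

d = 32
# residual quantizer with 4 codebooks of 8 bits each,
# plus a non-uniformly quantized 8-bit norm
index = faiss.index_factory(d, "RQ4x8_Ncqint8")
xt = np.random.rand(5000, d).astype('float32')
index.train(xt)
index.add(xt)
D, I = index.search(xt[:5], 10)
```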
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/2037
Test Plan:
buck test //faiss/tests/:test_clustering -- TestClustering1D
buck test //faiss/tests/:test_lsq -- test_index_accuracy_cqint
buck test //faiss/tests/:test_residual_quantizer -- test_norm_cqint
buck test //faiss/tests/:test_residual_quantizer -- test_search_L2
Reviewed By: beauby
Differential Revision: D31083476
Pulled By: mdouze
fbshipit-source-id: f34c3dafc4eb1c6f44a63e68137158911aa4a2f4
Summary:
## Description
This PR adds support for LSQ on GPU. Only the encoding part runs on the GPU; the other steps still run on the CPU.
Multi-GPU is also supported.
## Usage
``` python
import faiss

# d: vector dimension, M: number of codebooks, nbits: bits per codebook index
lsq = faiss.LocalSearchQuantizer(d, M, nbits)
ngpus = faiss.get_num_gpus()
lsq.icm_encoder_factory = faiss.GpuIcmEncoderFactory(ngpus)  # use all GPUs
lsq.train(xt)                  # xt: training vectors
codes = lsq.compute_codes(xb)  # xb: database vectors; encoding runs on the GPUs
decoded = lsq.decode(codes)
```
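Judging from the timings below, the encoding work is distributed across the GPUs and scales close to linearly from 1 to 2 GPUs.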
## Performance on SIFT1M
On 1 GPU:
```
===== lsq-gpu:
mean square error = 17337.878528
training time: 40.9857234954834 s
encoding time: 27.12640070915222 s
```
On 2 GPUs:
```
===== lsq-gpu:
mean square error = 17364.658176
training time: 25.832106113433838 s
encoding time: 14.879548072814941 s
```
On CPU:
```
===== lsq:
mean square error = 17305.880576
training time: 152.57522344589233 s
encoding time: 110.01779270172119 s
```
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1978
Test Plan: buck test mode/dev-nosan //faiss/gpu/test/:test_gpu_index_py -- TestLSQIcmEncoder
Reviewed By: wickedfoo
Differential Revision: D29609763
Pulled By: mdouze
fbshipit-source-id: b6ffa2a3c02bf696a4e52348132affa0dd838870
Summary:
## Description
The process of updating the codebooks in LSQ may be unstable if the data is not zero-centered. This diff fixes it by using `double` instead of `float` during the codebook update. This does not noticeably affect performance since the update step is fast.
Users can switch back to `float` mode by setting `update_codebooks_with_double = False`, as sketched below.
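A minimal sketch of toggling the flag (parameter values are hypothetical):
``` python
import faiss

d, M, nbits = 64, 4, 8  # example parameters, not from the diff
lsq = faiss.LocalSearchQuantizer(d, M, nbits)
lsq.update_codebooks_with_double = False  # opt out of double-precision codebook updates
```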
## Changes
1. Support `double` during codebook updating.
2. Add a unit test.
3. Add `__init__.py` under `contrib/` to avoid warnings.
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1975
Reviewed By: wickedfoo
Differential Revision: D29565632
Pulled By: mdouze
fbshipit-source-id: 932d7932ae9725c299cd83f87495542703ad6654
Summary:
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1906
This PR implements LSQ/LSQ++, a vector quantization technique described in the following two papers:
1. Revisiting additive quantization
2. LSQ++: Lower running time and higher recall in multi-codebook quantization
Here is a benchmark run on SIFT1M with 64-bit codes:
```
===== lsq:
mean square error = 17335.390208
training time: 312.729779958725 s
encoding time: 244.6277096271515 s
===== pq:
mean square error = 23743.004672
training time: 1.1610801219940186 s
encoding time: 2.636141061782837 s
===== rq:
mean square error = 20999.737344
training time: 31.813055515289307 s
encoding time: 307.51959800720215 s
```
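A minimal sketch of how the new quantizer is used (random data stands in for SIFT1M, and the error metric is assumed to match the benchmark's per-vector mean squared error):
``` python
import faiss
import numpy as np

d, M, nbits = 128, 8, 8  # 8 codebooks x 8 bits = 64-bit codes, as in the benchmark
xt = np.random.rand(10000, d).astype('float32')  # random stand-in for SIFT1M
xb = np.random.rand(1000, d).astype('float32')

lsq = faiss.LocalSearchQuantizer(d, M, nbits)
lsq.train(xt)
codes = lsq.compute_codes(xb)   # (1000, 8) uint8 codes
x_recons = lsq.decode(codes)
mse = ((xb - x_recons) ** 2).sum() / xb.shape[0]  # per-vector mean squared error
```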
Changes:
1. Add LocalSearchQuantizer object
2. Fix an out-of-memory bug in ResidualQuantizer
3. Add a benchmark for evaluating quantizers
4. Add tests for LocalSearchQuantizer
Pull Request resolved: https://github.com/facebookresearch/faiss/pull/1862
Test Plan:
```
buck test //faiss/tests/:test_lsq
buck run mode/opt //faiss/benchs/:bench_quantizer -- lsq pq rq
```
Reviewed By: beauby
Differential Revision: D28376369
Pulled By: mdouze
fbshipit-source-id: 2a394d38bf75b9de0a1c2cd6faddf7dd362a6fa8