Torchreid
===========

.. image:: https://img.shields.io/github/license/KaiyangZhou/deep-person-reid
   :alt: GitHub license
   :target: https://github.com/KaiyangZhou/deep-person-reid/blob/master/LICENSE

.. image:: https://img.shields.io/github/v/release/KaiyangZhou/deep-person-reid
   :alt: GitHub release (latest by date)

.. image:: https://img.shields.io/github/stars/KaiyangZhou/deep-person-reid
   :alt: GitHub stars
   :target: https://github.com/KaiyangZhou/deep-person-reid/stargazers

.. image:: https://img.shields.io/github/forks/KaiyangZhou/deep-person-reid
   :alt: GitHub forks
   :target: https://github.com/KaiyangZhou/deep-person-reid/network

Torchreid is a library built on `PyTorch <https://pytorch.org/>`_ for deep-learning person re-identification.

It features:

How-to instructions: https://kaiyangzhou.github.io/deep-person-reid/user_guide.

Model zoo: https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.

Tech report: https://arxiv.org/abs/1910.10093.

Installation
---------------

ReID-specific models

- `PCB <https://arxiv.org/abs/1711.09349>`_
- `MLFN <https://arxiv.org/abs/1803.09132>`_
- `OSNet <https://arxiv.org/abs/1905.00953>`_
- `OSNet-AIN <https://arxiv.org/abs/1910.06827>`_

Losses
------

Citation
---------

If you find this code useful to your research, please cite the following publications.

.. code-block:: bash

    @article{torchreid,
      title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
      author={Zhou, Kaiyang and Xiang, Tao},
      journal={arXiv preprint arXiv:1910.10093},
      year={2019}
    }

    @inproceedings{zhou2019osnet,
      title={Omni-Scale Feature Learning for Person Re-Identification},
      author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
      booktitle={ICCV},
      year={2019}
    }

    @article{zhou2019learning,
      title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
      author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
      journal={arXiv preprint arXiv:1910.06827},
      year={2019}
    }

Visualize learning curves with tensorboard
--------------------------------------------
The ``SummaryWriter()`` for tensorboard is automatically initialized in ``engine.run()`` when you train your model, so no extra work is needed on your side. After training is done, the ``*tf.events*`` file is saved in ``save_dir``. Then simply call ``tensorboard --logdir=your_save_dir`` in your terminal and visit ``http://localhost:6006/`` in a web browser. See `pytorch tensorboard <https://pytorch.org/docs/stable/tensorboard.html>`_ for further information.

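As a concrete sketch of the launch step, the directory given to tensorboard is simply the one used for ``save_dir`` (``log/osnet_x1_0`` below is a hypothetical path; substitute whatever you passed to ``engine.run()``):

```python
# Sketch: assemble the tensorboard launch command for a hypothetical save_dir.
# Torchreid writes the *tf.events* files into this directory during training.
save_dir = "log/osnet_x1_0"  # hypothetical; the save_dir passed to engine.run()
cmd = f"tensorboard --logdir={save_dir}"
print(cmd)  # run this in a terminal, then open http://localhost:6006/
```
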
Visualize ranking results
---------------------------
This can be achieved by setting ``visrank`` to true in ``engine.run()``. ``visrank_topk`` determines the top-k images to be visualized (default is ``visrank_topk=10``). Note that ``visrank`` can only be used in test mode, i.e. ``test_only=True`` in ``engine.run()``. The output will be saved under ``save_dir/visrank_DATASETNAME``, where each plot contains the top-k similar gallery images given a query. An example is shown below, where red and green denote incorrect and correct matches respectively.

.. image:: figures/ranking_results.jpg
   :width: 800px
   :align: center
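Under the hood, building each ranked list amounts to sorting the gallery by distance to the query and keeping the top-k entries. A minimal, self-contained sketch of that selection step (the distance values are made up; Torchreid computes real distances from extracted features):

```python
# Top-k selection behind ranking visualization.
# Smaller distance means more similar to the query.
def topk_ranking(distances, k=10):
    """Return gallery indices sorted by ascending distance, truncated to k."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    return order[:k]

query_to_gallery = [0.9, 0.1, 0.5, 0.3]  # hypothetical query-to-gallery distances
print(topk_ranking(query_to_gallery, k=3))  # [1, 3, 2]
```

In the saved plots, these top-k gallery images are drawn next to the query, with green and red borders marking correct and incorrect identity matches.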