<p align="center">
<img src="imgs/deep-person-reid-logo.png" alt="logo" width="260">
</p>
## Introduction
Deep-person-reid is a [PyTorch](http://pytorch.org/)-based framework for training and evaluating deep person re-identification models on reid benchmarks.
It has the following features:
- multi-GPU training.
- both image reid and video reid.
- standard dataset splits used by most research papers.
- incredibly easy preparation of reid datasets.
- implementations of state-of-the-art reid models.
- end-to-end training and evaluation.
- multi-dataset training.
- visualization of ranked results.
- state-of-the-art training techniques.
## Updates
- 11-11-2018 (**New**): Added multi-dataset training; Added cython code for cuhk03-style evaluation; Wrapped dataloader construction into Image/Video-DataManager; Wrapped argparse into [args.py](args.py); Added [MLFN (CVPR'18)](https://arxiv.org/abs/1803.09132).
## Installation
1. Run `git clone https://github.com/KaiyangZhou/deep-person-reid`.
2. Install dependencies by `pip install -r requirements.txt` (if necessary).
3. To install the cython-based evaluation toolbox, `cd` to `torchreid/eval_cylib` and do `make`. As a result, `eval_metrics_cy.so` is generated under the same folder. Run `python test_cython.py` to test if the toolbox is installed successfully. (credit to [luzai](https://github.com/luzai))
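
Putting the steps together, a typical setup might look like this (the paths follow the repository layout described above):

```bash
git clone https://github.com/KaiyangZhou/deep-person-reid
cd deep-person-reid
pip install -r requirements.txt   # if dependencies are missing

# build the cython-based evaluation toolbox
cd torchreid/eval_cylib
make                              # generates eval_metrics_cy.so in this folder
python test_cython.py             # checks that the toolbox works
cd ../..
```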
## Datasets
Image-reid datasets:
- [Market1501](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zheng_Scalable_Person_Re-Identification_ICCV_2015_paper.pdf) (`market1501`)
- [CUHK03](https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Li_DeepReID_Deep_Filter_2014_CVPR_paper.pdf) (`cuhk03`)
- [DukeMTMC-reID](https://arxiv.org/abs/1701.07717) (`dukemtmcreid`)
- [MSMT17](https://arxiv.org/abs/1711.08565) (`msmt17`)
- [VIPeR](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.331.7285&rep=rep1&type=pdf) (`viper`)
- [GRID](http://www.eecs.qmul.ac.uk/~txiang/publications/LoyXiangGong_cvpr_2009.pdf) (`grid`)
- [CUHK01](http://www.ee.cuhk.edu.hk/~xgwang/papers/liZWaccv12.pdf) (`cuhk01`)
- [PRID450S](https://pdfs.semanticscholar.org/f62d/71e701c9fd021610e2076b5e0f5b2c7c86ca.pdf) (`prid450s`)
- [SenseReID](http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Spindle_Net_Person_CVPR_2017_paper.pdf) (`sensereid`)

Video-reid datasets:
- [MARS](http://www.liangzheng.org/1320.pdf) (`mars`)
- [iLIDS-VID](https://www.eecs.qmul.ac.uk/~sgg/papers/WangEtAl_ECCV14.pdf) (`ilidsvid`)
- [PRID2011](https://pdfs.semanticscholar.org/4c1b/f0592be3e535faf256c95e27982db9b3d3d3.pdf) (`prid2011`)
- [DukeMTMC-VideoReID](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Exploit_the_Unknown_CVPR_2018_paper.pdf) (`dukemtmcvidreid`)

The keys to use these datasets are enclosed in the parentheses. See [torchreid/datasets/\_\_init__.py](torchreid/datasets/__init__.py) for details. The data managers of image reid and video reid are implemented in [torchreid/data_manager.py](torchreid/data_manager.py).
Instructions regarding how to prepare (and do evaluation on) these datasets can be found in [DATASETS.md](DATASETS.md).
## Models
### ImageNet classification models
- [ResNet](https://arxiv.org/abs/1512.03385)
- [ResNeXt](https://arxiv.org/abs/1611.05431)
- [SENet](https://arxiv.org/abs/1709.01507)
- [DenseNet](https://arxiv.org/abs/1608.06993)
- [Inception-ResNet-V2](https://arxiv.org/abs/1602.07261)
- [Inception-V4](https://arxiv.org/abs/1602.07261)
- [Xception](https://arxiv.org/abs/1610.02357)
### Lightweight models
- [NASNet](https://arxiv.org/abs/1707.07012)
- [MobileNetV2](https://arxiv.org/abs/1801.04381)
- [ShuffleNet](https://arxiv.org/abs/1707.01083)
- [SqueezeNet](https://arxiv.org/abs/1602.07360)
### ReID-specific models
- [MuDeep](https://arxiv.org/abs/1709.05165)
- [ResNet-mid](https://arxiv.org/abs/1711.08106)
- [HACNN](https://arxiv.org/abs/1802.08122)
- [PCB](https://arxiv.org/abs/1711.09349)
- [MLFN](https://arxiv.org/abs/1803.09132)

Please refer to [torchreid/models/\_\_init__.py](torchreid/models/__init__.py) for the keys to build these models. In the [MODEL_ZOO](MODEL_ZOO.md), we provide pretrained model weights and the training scripts to reproduce the results.
## Losses
- `xent`: cross entropy loss (enable the [label smoothing regularizer](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf) by `--label-smooth`).
- `htri`: [hard mining triplet loss](https://arxiv.org/abs/1703.07737).
## Tutorial
### Train
Training methods are implemented in
- `train_imgreid_xent.py`: train image-reid models with cross entropy loss.
- `train_imgreid_xent_htri.py`: train image-reid models with hard mining triplet loss or the combination of hard mining triplet loss and cross entropy loss.
- `train_vidreid_xent.py`: train video-reid models with cross entropy loss.
- `train_vidreid_xent_htri.py`: train video-reid models with hard mining triplet loss or the combination of hard mining triplet loss and cross entropy loss.

Input arguments for the above training scripts are unified in [args.py](args.py).

To train an image-reid model with cross entropy loss, you can do
```bash
# -s: source dataset for training; -t: target dataset for test
# --height / --width: image height / width
# --optim: optimizer; --label-smooth: label smoothing regularizer; --lr: learning rate
# --max-epoch: maximum epoch to run; --stepsize: stepsize for learning rate decay
# -a: network architecture; --save-dir: where to save the log and models; --gpu-devices: gpu device index
python train_imgreid_xent.py \
    -s market1501 \
    -t market1501 \
    --height 256 \
    --width 128 \
    --optim amsgrad \
    --label-smooth \
    --lr 0.0003 \
    --max-epoch 60 \
    --stepsize 20 40 \
    --train-batch-size 32 \
    --test-batch-size 100 \
    -a resnet50 \
    --save-dir log/resnet50-market-xent \
    --gpu-devices 0
```
#### Multi-dataset training
`-s` and `-t` can each take one or more dataset keys (separated by spaces). For example, if you want to train models on Market1501 + DukeMTMC-reID and test on both of them, use `-s market1501 dukemtmcreid` and `-t market1501 dukemtmcreid`. If you want to test on a different dataset, e.g. MSMT17, simply set `-t msmt17`. Multi-dataset training is implemented for both image-reid and video-reid. Note that when `-t` takes multiple datasets, evaluation is performed on each dataset individually.
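
For example, reusing the flags from the training command above, a Market1501 + DukeMTMC-reID run that tests on both datasets might look like this (remaining flags omitted for brevity; the `--save-dir` name is just an example):

```bash
python train_imgreid_xent.py \
    -s market1501 dukemtmcreid \
    -t market1501 dukemtmcreid \
    --height 256 \
    --width 128 \
    -a resnet50 \
    --save-dir log/resnet50-market-duke-xent \
    --gpu-devices 0
```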
#### Two-stepped transfer learning
To finetune models pretrained on external large-scale datasets, such as [ImageNet](http://www.image-net.org/), the [two-stepped training strategy](https://arxiv.org/abs/1611.05244) is useful.

First, the base network is frozen and only the randomly initialized layers (e.g. the identity classification layer) are trained for `--fixbase-epoch` epochs. Specifically, the layers specified by `--open-layers` are set to the **train** mode and will be updated, while all other layers are set to the **eval** mode and are frozen. See `open_specified_layers(model, open_layers)` in [torchreid/utils/torchtools.py](torchreid/utils/torchtools.py).

Second, after the new layers have adapted to the pretrained base, all layers are set to the **train** mode and trained for `--max-epoch` epochs. See `open_all_layers(model)` in [torchreid/utils/torchtools.py](torchreid/utils/torchtools.py).

For example, to train [resnet50](torchreid/models/resnet.py) with a randomly initialized `classifier`, you can set `--fixbase-epoch 5` and `--open-layers classifier`. The layer names must match the attribute names in the model, i.e. `self.classifier` must exist in the model.
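
As a sketch, these two flags are simply appended to the usual training command (most other flags from the earlier example are omitted here; the `--save-dir` name is illustrative):

```bash
# --fixbase-epoch 5: train only the randomly initialized layers for the first 5 epochs
# --open-layers classifier: layers kept in train mode while the base network is frozen
python train_imgreid_xent.py \
    -s market1501 \
    -t market1501 \
    -a resnet50 \
    --fixbase-epoch 5 \
    --open-layers classifier \
    --max-epoch 60 \
    --save-dir log/resnet50-market-xent-fixbase \
    --gpu-devices 0
```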
#### Using hard mining triplet loss
Training with `htri` requires adding `--train-sampler RandomIdentitySampler`, which ensures that each batch contains several images per identity.
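
A minimal sketch of a triplet-loss run, assuming the same unified flags as above (the `--save-dir` name is illustrative):

```bash
python train_imgreid_xent_htri.py \
    -s market1501 \
    -t market1501 \
    -a resnet50 \
    --train-sampler RandomIdentitySampler \
    --train-batch-size 32 \
    --max-epoch 60 \
    --save-dir log/resnet50-market-xent-htri \
    --gpu-devices 0
```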
#### Training video-reid models
For video reid, `--test-batch-size` refers to the number of tracklets, so the actual number of images per batch is `--test-batch-size * --seq-len`.
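
A rough video-reid example, assuming the video scripts accept the same unified flags from [args.py](args.py) (the `--seq-len` value and `--save-dir` name are only illustrative):

```bash
# --seq-len: frames sampled per tracklet; each test batch then loads
# --test-batch-size tracklets, i.e. --test-batch-size * --seq-len images
python train_vidreid_xent.py \
    -s mars \
    -t mars \
    -a resnet50 \
    --seq-len 15 \
    --train-batch-size 32 \
    --test-batch-size 100 \
    --save-dir log/resnet50-mars-xent \
    --gpu-devices 0
```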
### Test
#### Evaluation mode
Use `--evaluate` to switch to the evaluation mode; no model training is performed. For example, to load model weights from `path_to/resnet50.pth.tar` for `resnet50` and evaluate on Market1501, you can do
```bash
# -s is ignored in evaluation mode; add more dataset keys to -t to extend the test list
python train_imgreid_xent.py \
    -s market1501 \
    -t market1501 \
    --height 256 \
    --width 128 \
    --test-batch-size 100 \
    --evaluate \
    -a resnet50 \
    --load-weights path_to/resnet50.pth.tar \
    --save-dir log/eval-resnet50 \
    --gpu-devices 0
```
Note that `--load-weights` will discard any layer weights in `path_to/resnet50.pth.tar` whose sizes do not match those of the corresponding model layers. If you encounter a `UnicodeDecodeError` when loading checkpoints, please try [this solution](https://github.com/KaiyangZhou/deep-person-reid/issues/43#issuecomment-411266053).
#### Evaluation frequency
Use `--eval-freq` to control how often evaluation is performed during training, and `--start-eval` to specify the epoch from which evaluation starts.
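
For example, to start evaluating from epoch 30 and then evaluate every 10 epochs, one could add the following to the training command (the values are just illustrative):

```bash
--start-eval 30 --eval-freq 10
```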
#### Visualize ranked results
Ranked results can be visualized via `--visualize-ranks`, which works along with `--evaluate`. Ranked images will be saved in `save_dir/ranked_results` where `save_dir` is the directory you specify with `--save-dir`. This function is implemented in [torchreid/utils/reidtools.py](torchreid/utils/reidtools.py).
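
For example, adding the flag to the evaluation command above (some flags are omitted for brevity):

```bash
python train_imgreid_xent.py \
    -s market1501 \
    -t market1501 \
    --evaluate \
    -a resnet50 \
    --load-weights path_to/resnet50.pth.tar \
    --visualize-ranks \
    --save-dir log/eval-resnet50 \
    --gpu-devices 0
```

The ranked images will then be written to `log/eval-resnet50/ranked_results`.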
<p align="center">
<img src="imgs/ranked_results.jpg" alt="ranked_results" width="600">
</p>
## Misc
- [Related person ReID projects](RELATED_PROJECTS.md).
## Citation
Please link this project in your paper.
## License
This project is under the [MIT License](LICENSE).