deep-person-reid

PyTorch implementation of deep person re-identification models.

We support

  • multi-GPU training.
  • both image-based and video-based reid.
  • standard dataset splits used by most papers.
  • unified interface for different reid models.
  • easy dataset preparation.
  • end-to-end training and evaluation.
  • fast cython-based evaluation.
  • multi-dataset training.
  • visualization of ranked results.
  • state-of-the-art reid models.

Updates

  • xx-11-2018: xxx.

Get started

  1. cd to the folder where you want to download this repo.
  2. Run git clone https://github.com/KaiyangZhou/deep-person-reid.
  3. Install dependencies by pip install -r requirements.txt (if necessary).
  4. To accelerate evaluation (10x faster), you can use the cython-based evaluation code (developed by luzai). First cd to eval_lib, then run make or python setup.py build_ext -i. After that, run python test_cython_eval.py to check that the package was installed successfully (the full sequence is shown below).
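
Put together, step 4 looks like this (run from the repository root):

cd eval_lib
make  # or: python setup.py build_ext -i
python test_cython_eval.py  # checks that the extension was installed successfully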

Datasets

Both image-reid and video-reid datasets are supported. Instructions on how to prepare (and evaluate on) these datasets can be found in DATASETS.md.

Models

Supported architectures fall into three groups:

  • ImageNet classification models.
  • Lightweight models.
  • ReID-specific models.

In the MODEL_ZOO, we provide pretrained models and the training scripts to reproduce the results.

Losses

The training scripts support

  • cross entropy loss (optionally with the label smoothing regularizer).
  • hard mining triplet loss (optionally combined with cross entropy loss).

Tutorial

Train

Training methods are implemented in

  • train_imgreid_xent.py: train image-reid models with cross entropy loss.
  • train_imgreid_xent_htri.py: train image-reid models with hard mining triplet loss or the combination of hard mining triplet loss and cross entropy loss.
  • train_vidreid_xent.py: train video-reid models with cross entropy loss.
  • train_vidreid_xent_htri.py: train video-reid models with hard mining triplet loss or the combination of hard mining triplet loss and cross entropy loss.

Input arguments for the above training scripts are unified in args.py.

To train an image-reid model with cross entropy loss, you can do

python train_imgreid_xent.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
--optim amsgrad \
--label-smooth \
--lr 0.0003 \
--max-epoch 60 \
--stepsize 20 40 \
--train-batch-size 32 \
--test-batch-size 100 \
-a resnet50 \
--save-dir log/resnet50-market-xent \
--gpu-devices 0

Here -s and -t specify the source dataset(s) for training and the target dataset(s) for testing; --height and --width set the input image size; --optim selects the optimizer; --label-smooth enables the label smoothing regularizer; --lr sets the learning rate; --max-epoch the maximum number of epochs; --stepsize the epochs at which the learning rate decays; -a the network architecture; --save-dir where logs and models are saved; and --gpu-devices the GPU index.

Multi-dataset training

-s and -t can each take multiple dataset names (delimited by space). For example, if you want to train models on Market1501 + DukeMTMC-reID and test on both of them, use -s market1501 dukemtmcreid and -t market1501 dukemtmcreid. To test on a different dataset instead, e.g. MSMT17, simply set -t msmt17. Multi-dataset training is implemented for both image-reid and video-reid. Note that when -t takes multiple datasets, evaluation is performed on each dataset individually.
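
For instance, a minimal sketch of such a run (remaining hyperparameters follow the earlier example; the --save-dir name is arbitrary):

python train_imgreid_xent.py \
-s market1501 dukemtmcreid \
-t market1501 dukemtmcreid msmt17 \
--height 256 \
--width 128 \
--optim amsgrad \
--lr 0.0003 \
--max-epoch 60 \
--stepsize 20 40 \
--train-batch-size 32 \
--test-batch-size 100 \
-a resnet50 \
--save-dir log/resnet50-multi-xent \
--gpu-devices 0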

Two-stepped transfer learning

To finetune models pretrained on external large-scale datasets, such as ImageNet, the two-stepped training strategy is useful. First, the base network is frozen and only the randomly initialized layers (e.g. the last linear classifier) are trained for --fixbase-epoch epochs: only the layers specified by --open-layers are set to train mode and allowed to update, while all other layers are frozen and set to eval mode. Second, once the new layers have adapted to the pretrained ones, all layers are opened for training for --max-epoch epochs.

For example, to train resnet50 with a randomly initialized classifier, you can set --fixbase-epoch 5 and --open-layers classifier. The layer names must align with the attribute names in the model, i.e. self.classifier must exist in the model. See open_specified_layers(model, open_layers) in torchreid/utils/torchtools.py for more details.
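
As an illustration, here is the earlier cross entropy training command with the two-stepped strategy enabled (the --save-dir name is arbitrary):

python train_imgreid_xent.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
--optim amsgrad \
--label-smooth \
--lr 0.0003 \
--max-epoch 60 \
--stepsize 20 40 \
--fixbase-epoch 5 \
--open-layers classifier \
--train-batch-size 32 \
--test-batch-size 100 \
-a resnet50 \
--save-dir log/resnet50-market-xent-fixbase \
--gpu-devices 0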

Using hard mining triplet loss

Training with the hard mining triplet loss (htri) requires adding --train-sampler RandomIdentitySampler, which ensures that each batch contains several images per identity so that hard positive/negative pairs can be mined.
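
For instance, a sketch of a triplet + cross entropy run (hyperparameters follow the earlier example; the --save-dir name is arbitrary):

python train_imgreid_xent_htri.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
--optim amsgrad \
--lr 0.0003 \
--max-epoch 60 \
--stepsize 20 40 \
--train-batch-size 32 \
--test-batch-size 100 \
--train-sampler RandomIdentitySampler \
-a resnet50 \
--save-dir log/resnet50-market-xent-htri \
--gpu-devices 0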

Training video-reid models

For video reid, --test-batch-size refers to the number of tracklets, so the actual number of images per batch is --test-batch-size * --seq-len; e.g. --test-batch-size 100 with --seq-len 15 loads 1500 images per batch.
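
A sketch of a video-reid run (mars is assumed here to be the dataset key for MARS, and --seq-len 15 is illustrative; see DATASETS.md for the available datasets):

python train_vidreid_xent.py \
-s mars \
-t mars \
--height 256 \
--width 128 \
--optim amsgrad \
--lr 0.0003 \
--max-epoch 60 \
--stepsize 20 40 \
--train-batch-size 32 \
--test-batch-size 100 \
--seq-len 15 \
-a resnet50 \
--save-dir log/resnet50-mars-xent \
--gpu-devices 0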

Test

Evaluation mode

Use --evaluate to switch to evaluation mode, in which no training is performed. For example, to load model weights from path_to/resnet50.pth.tar for resnet50 and evaluate on Market1501, you can do

python train_imgreid_xent.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
--test-batch-size 100 \
--evaluate \
-a resnet50 \
--load-weights path_to/resnet50.pth.tar \
--save-dir log/resnet50-eval \
--gpu-devices 0

In evaluation mode the choice of -s no longer matters, and -t can take multiple datasets to test on.

Note that --load-weights will discard layer weights that do not match the model layers in size.

Visualize ranked results

Ranked results can be visualized via --visualize-ranks, which works together with --evaluate. Ranked images will be saved in save_dir/ranked_results, where save_dir is the directory you specify with --save-dir.
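
For instance, extending the evaluation command above:

python train_imgreid_xent.py \
-s market1501 \
-t market1501 \
--height 256 \
--width 128 \
--test-batch-size 100 \
--evaluate \
--visualize-ranks \
-a resnet50 \
--load-weights path_to/resnet50.pth.tar \
--save-dir log/resnet50-eval \
--gpu-devices 0

The ranked images are then written to log/resnet50-eval/ranked_results.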


Misc

For a list of related projects, see RELATED_PROJECTS.md.

Citation

Please link to this project in your paper.

License

This project is under the MIT License.