SOTA Re-identification Methods and Toolbox

ReID_baseline

A strong baseline (state-of-the-art) for person re-identification.

We support

  • easy dataset preparation
  • end-to-end training and evaluation
  • multi-GPU distributed training
  • fast training with fp16
  • fast evaluation with Cython
  • both image and video ReID
  • multi-dataset training
  • cross-dataset evaluation
  • highly modular design
  • state-of-the-art performance with a simple model
  • highly efficient backbones
  • advanced training techniques
  • various loss functions
  • TensorBoard visualization

Get Started

The project architecture follows the PyTorch-Project-Template guide; see that guide to understand the purpose of each folder.

  1. cd to the folder where you want to clone this repo

  2. Run git clone https://github.com/L1aoXingyu/reid_baseline.git

  3. Install dependencies:

  4. Prepare the dataset

    Create a directory under this repo to store ReID datasets:

    cd reid_baseline
    mkdir datasets
    
    1. Download the dataset to datasets/ from Baidu Pan or Google Drive
    2. Extract the dataset. The directory structure should look like this (see the sanity-check sketch after these steps):
    datasets
        Market-1501-v15.09.15
            bounding_box_test/
            bounding_box_train/
    
  5. Prepare the pretrained model. If you use the original ResNet, you do not need to do anything. If you want to use ResNet_ibn, download the pretrained model from here and put it in ~/.cache/torch/checkpoints or anywhere you like.

    Then set this pretrained model path in configs/softmax_triplet.yml.

  6. Compile with Cython to accelerate evaluation:

    cd csrc/eval_cylib; make
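
Before training, it is worth sanity-checking that the dataset from step 4 ended up where the code expects it. Below is a minimal, hedged sketch; the folder names follow the Market-1501 layout shown above, and the repo's own data loader remains the source of truth:

    import os

    # Hypothetical check for the layout described in step 4 (not part of the repo).
    root = "datasets/Market-1501-v15.09.15"
    for name in ["bounding_box_train", "bounding_box_test"]:
        path = os.path.join(root, name)
        n_imgs = len([f for f in os.listdir(path) if f.endswith(".jpg")])
        print(f"{path}: {n_imgs} images")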
    

Train

Using the configuration files we provide, you can train on Market1501 with this command

bash scripts/train_market.sh

Or you can run the command below and override cfg parameters directly on the command line

python3 tools/train.py -cfg='configs/softmax.yml' INPUT.SIZE_TRAIN '(256, 128)' INPUT.SIZE_TEST '(256, 128)'
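
The trailing KEY VALUE pairs override entries loaded from the YAML file. Here is a minimal sketch of how such overrides are typically merged with a yacs config; the keys and defaults below are illustrative assumptions, not the repo's actual schema:

    # Illustrative yacs-style config with command-line overrides.
    from yacs.config import CfgNode as CN

    cfg = CN()
    cfg.INPUT = CN()
    cfg.INPUT.SIZE_TRAIN = (384, 128)  # assumed default, overridden below
    cfg.INPUT.SIZE_TEST = (384, 128)

    # train.py presumably gathers the trailing arguments into a flat list
    # like this and merges it on top of the values from configs/softmax.yml.
    opts = ["INPUT.SIZE_TRAIN", "(256, 128)", "INPUT.SIZE_TEST", "(256, 128)"]
    cfg.merge_from_list(opts)  # later values win over the defaults
    cfg.freeze()
    print(cfg.INPUT.SIZE_TRAIN)  # (256, 128)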

Test

You can test your model's performance directly by running this command

python3 tools/test.py --config_file='configs/softmax.yml' TEST.WEIGHT '/save/trained_model/path'
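
For reference, re-id testing boils down to ranking gallery images by feature distance for every query and computing rank-1/mAP from that ranking (this is the part the Cython extension accelerates). A hedged sketch with toy tensors that does not use the repo's actual API:

    import torch

    def euclidean_dist(qf, gf):
        # Pairwise Euclidean distances between query (m x d) and gallery (n x d) features.
        m, n = qf.size(0), gf.size(0)
        dist = (qf.pow(2).sum(1, keepdim=True).expand(m, n)
                + gf.pow(2).sum(1, keepdim=True).expand(n, m).t())
        dist.addmm_(qf, gf.t(), beta=1, alpha=-2)  # ||q||^2 + ||g||^2 - 2 q.g
        return dist.clamp(min=1e-12).sqrt()

    # Toy stand-ins for features produced by the trained model.
    query_feats = torch.randn(8, 2048)
    gallery_feats = torch.randn(100, 2048)

    distmat = euclidean_dist(query_feats, gallery_feats)
    ranking = distmat.argsort(dim=1)  # gallery indices sorted by distance, per query
    print(ranking[:, :5])             # top-5 gallery candidates for each query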

Experiment Results

size=(256, 128), batch_size=64 (16 identities x 4 images each); results reported as rank-1 (mAP)

Config                            Market1501     DukeMTMC-reID   CUHK03
softmax                           93.4 (82.9)    84.7 (72.7)     -
softmax + triplet                 94.2 (86.1)    87.3 (76.0)     -
softmax + ibn                     93.3 (84.3)    86.7 (74.9)     -
softmax + triplet + ibn           94.9 (86.4)    87.9 (77.1)     -
softmax + triplet + ibn + gcnet   -              -               -

🔥 Any other tricks are welcome!