SOTA Re-identification Methods and Toolbox

ReID_baseline

A strong baseline (state-of-the-art) for person re-identification.

We support:

  • easy dataset preparation
  • end-to-end training and evaluation
  • multi-GPU distributed training
  • fast training with fp16
  • fast evaluation with Cython
  • both image and video ReID
  • multi-dataset training
  • cross-dataset evaluation
  • highly modular design
  • state-of-the-art performance with a simple model
  • highly efficient backbones
  • advanced training techniques
  • various loss functions
  • TensorBoard visualization

Get Started

The architecture follows the PyTorch-Project-Template guide; you can check each folder's purpose against that layout.

  1. cd to the folder where you want to clone this repo

  2. Run git clone https://github.com/L1aoXingyu/reid_baseline.git

  3. Install dependencies; a sketch of a typical setup is shown below.
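
    The README does not list the packages at this point. Assuming a typical setup for this codebase (PyTorch, torchvision, yacs for the config system, and Cython for the compiled evaluation), a minimal install might look like:

    # Hypothetical dependency set; check the repository for the authoritative list.
    pip install torch torchvision yacs cython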

  4. Prepare dataset

    Create a directory under this repo to store ReID datasets:

    cd reid_baseline
    mkdir datasets
    
    1. Download the dataset into datasets/ from Baidu Pan or Google Drive.
    2. Extract the dataset. The directory structure should look like:
    datasets
        Market-1501-v15.09.15
            bounding_box_test/
            bounding_box_train/
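
    The standard Market-1501-v15.09.15 release also contains a query/ folder (plus gt_bbox/ and gt_query/), and the query images are needed for evaluation, so a fully extracted dataset typically looks like:

    datasets
        Market-1501-v15.09.15
            bounding_box_test/
            bounding_box_train/
            query/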
    
  5. Prepare the pretrained model. If you use the original ResNet, you do not need to do anything. If you want to use ResNet_ibn, you need to download the pretrained model here, and then put it in ~/.cache/torch/checkpoints or anywhere you like.

    Then set this pretrained model path in configs/softmax_triplet.yml; a sketch is shown below.
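
    A minimal sketch of the relevant entry, assuming the config exposes the path under a MODEL.PRETRAIN_PATH key (the exact key name may differ; check configs/softmax_triplet.yml):

    MODEL:
      PRETRAIN_PATH: '/path/to/pretrained/resnet_ibn_weights.pth'  # assumed key name; point this at the downloaded file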

  6. Compile with Cython to accelerate evaluation:

    cd csrc/eval_cylib; make
    

Train

With the configuration files that we provide, you can run this command to train on Market-1501:

bash scripts/train_market.sh

Or you can run the command below and override cfg parameters directly on the command line:

python3 tools/train.py -cfg='configs/softmax.yml' INPUT.SIZE_TRAIN '(256, 128)' INPUT.SIZE_TEST '(256, 128)'

Test

You can test your model's performance directly by running this command

python3 tools/test.py --config_file='configs/softmax.yml' TEST.WEIGHT '/save/trained_model/path'

Results

cfg                                                                  Market-1501    DukeMTMC-reID
softmax+triplet, size=(256, 128), batch_size=64 (16 ids x 4 imgs)    93.9 (85.9)    86.5 (75.9)

Results are reported as Rank-1 (mAP).