# How to prepare data

Create a directory to store reid datasets under this repo via

```bash
cd deep-person-reid/
mkdir data/
```

Note that

- please follow the instructions below to prepare each dataset. After that, you can simply use pre-defined keys to build the datasets, e.g. `-s market1501` (use Market1501 as the training dataset).
- please do not assign image-reid dataset keys to video-reid training scripts, and vice versa, otherwise errors will occur (see [torchreid/data_manager.py](torchreid/data_manager.py)).
- please use the suggested names for the dataset folders, otherwise you have to modify the `dataset_dir` attribute in the specific `dataset.py` file in `torchreid/datasets/` accordingly.
- if you find any errors/bugs, please report them in the Issues section.
- in the following, we assume that the path to the dataset directory is `data/`. However, you can store datasets wherever you want; all you need to do is specify the root path with `--root path/to/your/data`, as in the example below.
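
For example, a minimal training run might look like the following. This is a sketch: the script name `train_imgreid_xent.py` is assumed here (a cross-entropy image-reid training script like the ones included below); substitute whichever training script you actually use.

```bash
python train_imgreid_xent.py \
    -s market1501 \
    -t market1501 \
    --root path/to/your/data \
    --save-dir log/market1501-xent
```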

## Image ReID

**Market1501**:
- Download the dataset to `data/` from http://www.liangzheng.org/Project/project_reid.html.
- Extract the file and rename it to `market1501`. The data structure should look like:
```
market1501/
    bounding_box_test/
    bounding_box_train/
    ...
```
- Use `market1501` as the key to load Market1501.
- To use the extra 500K distractors (i.e. Market1501 + 500K), go to the **Market-1501+500k Dataset** section at http://www.liangzheng.org/Project/project_reid.html, download the zip file (`distractors_500k.zip`), and extract it under `market1501/`. As a result, you will have a folder named `images/`. Use `--market1501-500k` to add these extra images to the gallery set when running the code.

**CUHK03**:
- Create a folder named `cuhk03/` under `data/`.
- Download the dataset to `data/cuhk03/` from http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html and extract `cuhk03_release.zip`, so you will have `data/cuhk03/cuhk03_release`.
- Download the new split (767/700) from [person-re-ranking](https://github.com/zhunzhong07/person-re-ranking/tree/master/evaluation/data/CUHK03). What you need are `cuhk03_new_protocol_config_detected.mat` and `cuhk03_new_protocol_config_labeled.mat`; put these two mat files under `data/cuhk03`. Finally, the data structure should look like
```
cuhk03/
    cuhk03_release/
    cuhk03_new_protocol_config_detected.mat
    cuhk03_new_protocol_config_labeled.mat
    ...
```
- Use `cuhk03` as the dataset key. In the default mode, we load data using the new split (767/700). If you want to use the original 20 splits (1367/100), specify `--cuhk03-classic-split`. As the CMC is computed differently from Market1501 for the 1367/100 split (see [here](http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html)), you need to specify `--use-metric-cuhk03` to activate the *single-gallery-shot* metric for fair comparison with methods that adopt the old splits (these do not need to report `mAP`). In addition, we support both `labeled` and `detected` modes. The default mode loads `detected` images. Specify `--cuhk03-labeled` if you want to train and test on `labeled` images (see the example command below).
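
As a concrete sketch (same script-name assumption as above), training and testing CUHK03 on the classic split with labeled images could look like:

```bash
python train_imgreid_xent.py \
    -s cuhk03 -t cuhk03 \
    --cuhk03-classic-split \
    --use-metric-cuhk03 \
    --cuhk03-labeled \
    --save-dir log/cuhk03-classic-labeled
```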

**DukeMTMC-reID**:
- The process is automated, so you can simply do `-s dukemtmcreid -t dukemtmcreid`. The final folder structure looks like
```
dukemtmc-reid/
    DukeMTMC-reid.zip # this zip file can be deleted
    DukeMTMC-reid/
```

**MSMT17**:
- Create a directory named `msmt17/` under `data/`.
- Download the dataset (e.g. `MSMT17_V1.tar.gz`) from http://www.pkuvmc.com/publications/msmt17.html to `data/msmt17/`. Extract the file in the same folder; you should end up with
```
msmt17/
    MSMT17_V1/ # different versions might differ in folder name
        train/
        test/
        list_train.txt
        list_query.txt
        list_gallery.txt
        list_val.txt
```
- Use `msmt17` as the key for this dataset.

**VIPeR**:
- The code supports automatic download and formatting. Just use `-s viper -t viper`. The final data structure would look like:
```
viper/
    VIPeR/
    VIPeR.v1.0.zip # useless
    splits.json
```

**GRID**:
- The code supports automatic download and formatting. Just use `-s grid -t grid`. The final data structure would look like:
```
grid/
    underground_reid/
    underground_reid.zip # useless
    splits.json
```

**CUHK01**:
- Create `cuhk01/` under `data/`.
- Download `CUHK01.zip` from http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html and place it in `cuhk01/`.
- Do `-s cuhk01 -t cuhk01` to use the data.

**PRID450S**:
- The code supports automatic download and formatting. Just use `-s prid450s -t prid450s`. The final data structure would look like:
```
prid450s/
    cam_a/
    cam_b/
    readme.txt
    splits.json
```

**QMUL-iLIDS**:
- The code supports automatic download and formatting. The key to use this dataset is `-s ilids -t ilids`. The final data structure would look like:
```
ilids/
    i-LIDS_Pedestrian/
        Persons/
```

**PRID**:
- Under `data/`, do `mkdir prid2011` to create a directory.
- Download the dataset from https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/PRID11/ and extract it under `data/prid2011`.
- The data structure would look like
```
prid2011/
    prid_2011/
        single_shot/
        multi_shot/
        readme.txt
```
- Use `-s prid -t prid` to build the dataset.

**SenseReID**:
- Create `sensereid/` under `data/`.
- Download the dataset from this [link](https://drive.google.com/file/d/0B56OfSrVI8hubVJLTzkwV2VaOWM/view) and extract it to `sensereid/`. The final folder structure should look like
```
sensereid/
    SenseReID/
        test_probe/
        test_gallery/
```
- The command for using SenseReID is `-t sensereid`. Note that SenseReID is for testing purposes only, so training images are unavailable. Please use `--evaluate` along with `-t sensereid`, as in the example below.
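
A sketch of such an evaluation-only run (same script-name assumption as above; the weights path is purely illustrative):

```bash
python train_imgreid_xent.py \
    -s market1501 -t sensereid \
    --evaluate \
    --load-weights path/to/trained/weights.pth.tar \
    --save-dir log/eval-sensereid
```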

## Video ReID

**MARS**:
- Create a directory named `mars/` under `data/`.
- Download the dataset to `data/mars/` from http://www.liangzheng.com.cn/Project/project_mars.html.
- Extract `bbox_train.zip` and `bbox_test.zip`.
- Download the split metadata from https://github.com/liangzheng06/MARS-evaluation/tree/master/info and put `info/` in `data/mars` (we want to follow the standard split). The data structure should look like:
```
mars/
    bbox_test/
    bbox_train/
    info/
```
- Use `mars` as the dataset key (see the example command below).
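
For instance, a MARS run could look like this. Again a sketch: the script name `train_vidreid_xent.py` is assumed (a video-reid training script like those below); `--seq-len` and `--pool-tracklet-features` are the video-specific options defined in args.py further down.

```bash
python train_vidreid_xent.py \
    -s mars -t mars \
    --seq-len 15 \
    --pool-tracklet-features avg \
    --save-dir log/mars-xent
```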

**iLIDS-VID**:
- The code supports automatic download and formatting. Simply use `-s ilidsvid -t ilidsvid`. The data structure would look like:
```
ilids-vid/
    i-LIDS-VID/
    train-test people splits/
    splits.json
```

**PRID-2011**:
- Under `data/`, do `mkdir prid2011` to create a directory.
- Download the dataset from https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/PRID11/ and extract it under `data/prid2011`.
- Download the split created by [iLIDS-VID](http://www.eecs.qmul.ac.uk/~xiatian/downloads_qmul_iLIDS-VID_ReID_dataset.html) from [here](http://www.eecs.qmul.ac.uk/~kz303/deep-person-reid/datasets/prid2011/splits_prid2011.json), and put it under `data/prid2011/`. Note that only the 178 persons whose sequences exceed a length threshold are used, so that results on this dataset can be fairly compared with other approaches. The data structure would look like:
```
prid2011/
    splits_prid2011.json
    prid_2011/
        multi_shot/
        single_shot/
        readme.txt
```
- Use `-s prid2011 -t prid2011` when running the training code.

**DukeMTMC-VideoReID**:
- Use `-s dukemtmcvidreid -t dukemtmcvidreid` directly.
- If you want to download the dataset manually, get `DukeMTMC-VideoReID.zip` from https://github.com/Yu-Wu/DukeMTMC-VideoReID. Unzip the file to `data/dukemtmc-vidreid`. Ultimately, you need to have
```
dukemtmc-vidreid/
    DukeMTMC-VideoReID/
        train/ # essential
        query/ # essential
        gallery/ # essential
        ... (and license files)
```

# Dataset loaders
These are implemented in `dataset_loader.py`, where we have two main classes that subclass [torch.utils.data.Dataset](http://pytorch.org/docs/master/_modules/torch/utils/data/dataset.html#Dataset):
* [ImageDataset](torchreid/dataset_loader.py): processes image-based person reid datasets.
* [VideoDataset](torchreid/dataset_loader.py): processes video-based person reid datasets.

These two classes are used by [torch.utils.data.DataLoader](http://pytorch.org/docs/master/_modules/torch/utils/data/dataloader.html#DataLoader) to provide batched data. The data loader with `ImageDataset` will output batches of size `(batch, channel, height, width)`, while the data loader with `VideoDataset` will output batches of size `(batch, sequence, channel, height, width)`.

# Evaluation
## Image ReID
- **Market1501**, **DukeMTMC-reID**, **CUHK03 (767/700 split)** and **MSMT17** have a fixed split, so keeping `split_id=0` is fine.
- **CUHK03 (classic split)** has 20 fixed splits, so vary `split_id` from 0 to 19.
- **VIPeR** contains 632 identities, each with 2 images under two camera views. Evaluation should be done over 10 random splits. Each split randomly divides the 632 identities into 316 train ids (632 images) and 316 test ids (632 images). Note that each random split contains two sub-splits: one uses camera A as query and camera B as gallery, the other uses camera B as query and camera A as gallery. Thus, there are 20 splits in total, with `split_id` ranging from 0 to 19. Models can be trained on `split_id=[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]` (because `split_id=0` and `split_id=1` share the same train set, and so on). At test time, models trained on `split_id=0` can be directly evaluated on `split_id=1`, models trained on `split_id=2` can be directly evaluated on `split_id=3`, and so forth, as sketched below.
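
A sketch of this pairing for the first split (same script-name assumption as above; the weights path is illustrative):

```bash
# train on sub-split 0 (camera A as query, camera B as gallery)
python train_imgreid_xent.py -s viper -t viper --split-id 0 \
    --save-dir log/viper-split0
# evaluate the trained model on the paired sub-split 1 (cameras swapped)
python train_imgreid_xent.py -s viper -t viper --split-id 1 --evaluate \
    --load-weights path/to/split0/weights.pth.tar \
    --save-dir log/viper-split0-eval
```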
|
||||
- **CUHK01** is similar to VIPeR in the split generation.
|
||||
- **GRID** , **PRID450S**, **iLIDS** and **PRID** have 10 random splits, so evaluation should be done by varying `split_id` from 0 to 9.
|
||||
- **SenseReID** has no training images and is used for evaluation only.
|
||||
|
||||
## Video ReID
|
||||
- **MARS** and **DukeMTMC-VideoReID** have fixed single split so using `-s dataset_name -t dataset_name` and `split_id=0` is ok.
|
||||
- **iLIDS-VID** and **PRID2011** have 10 predefined splits so evaluation should be done by varying `split_id` from 0 to 9.
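
A minimal sketch of such a sweep (video script name assumed as before):

```bash
for split in $(seq 0 9); do
    python train_vidreid_xent.py -s ilidsvid -t ilidsvid \
        --split-id ${split} --save-dir log/ilidsvid-split${split}
done
```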

# ----------------------------------------------------------------------
# args.py: command-line argument definitions shared by the training
# scripts that follow
# ----------------------------------------------------------------------
import argparse


def argument_parser():
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)

    # ************************************************************
    # Datasets (general)
    # ************************************************************
    parser.add_argument('--root', type=str, default='data',
                        help='root path to data directory')
    parser.add_argument('-s', '--source-names', type=str, required=True, nargs='+',
                        help='source datasets (delimited by space)')
    parser.add_argument('-t', '--target-names', type=str, required=True, nargs='+',
                        help='target datasets (delimited by space)')
    parser.add_argument('-j', '--workers', default=4, type=int,
                        help='number of data loading workers (tip: 4 or 8 times the number of gpus)')
    parser.add_argument('--split-id', type=int, default=0,
                        help='split index (note: 0-based)')
    parser.add_argument('--height', type=int, default=256,
                        help='height of an image')
    parser.add_argument('--width', type=int, default=128,
                        help='width of an image')
    parser.add_argument('--train-sampler', type=str, default='RandomSampler',
                        help='sampler for trainloader')
    parser.add_argument('--combineall', action='store_true',
                        help='combine all data in a dataset (train+query+gallery) for training')

    # ************************************************************
    # Data augmentation
    # ************************************************************
    parser.add_argument('--random-erase', action='store_true',
                        help='use random erasing for data augmentation')
    parser.add_argument('--color-jitter', action='store_true',
                        help='randomly change the brightness, contrast and saturation')
    parser.add_argument('--color-aug', action='store_true',
                        help='randomly alter the intensities of RGB channels')

    # ************************************************************
    # Video datasets
    # ************************************************************
    parser.add_argument('--seq-len', type=int, default=15,
                        help='number of images to sample in a tracklet')
    parser.add_argument('--sample-method', type=str, default='evenly',
                        help='how to sample images from a tracklet')
    parser.add_argument('--pool-tracklet-features', type=str, default='avg', choices=['avg', 'max'],
                        help='how to pool features over a tracklet (for video reid)')

    # ************************************************************
    # Dataset-specific setting
    # ************************************************************
    parser.add_argument('--cuhk03-labeled', action='store_true',
                        help='use labeled images, if false, use detected images')
    parser.add_argument('--cuhk03-classic-split', action='store_true',
                        help='use classic split by Li et al. CVPR\'14')
    parser.add_argument('--use-metric-cuhk03', action='store_true',
                        help='use cuhk03\'s metric for evaluation')

    parser.add_argument('--market1501-500k', action='store_true',
                        help='add 500k distractors to the gallery set for market1501')

    # ************************************************************
    # Optimization options
    # ************************************************************
    parser.add_argument('--optim', type=str, default='adam',
                        help='optimization algorithm (see optimizers.py)')
    parser.add_argument('--lr', default=0.0003, type=float,
                        help='initial learning rate')
    parser.add_argument('--weight-decay', default=5e-04, type=float,
                        help='weight decay')
    # sgd
    parser.add_argument('--momentum', default=0.9, type=float,
                        help='momentum factor for sgd and rmsprop')
    parser.add_argument('--sgd-dampening', default=0, type=float,
                        help='sgd\'s dampening for momentum')
    parser.add_argument('--sgd-nesterov', action='store_true',
                        help='whether to enable sgd\'s Nesterov momentum')
    # rmsprop
    parser.add_argument('--rmsprop-alpha', default=0.99, type=float,
                        help='rmsprop\'s smoothing constant')
    # adam/amsgrad
    parser.add_argument('--adam-beta1', default=0.9, type=float,
                        help='exponential decay rate for adam\'s first moment')
    parser.add_argument('--adam-beta2', default=0.999, type=float,
                        help='exponential decay rate for adam\'s second moment')

    # ************************************************************
    # Training hyperparameters
    # ************************************************************
    parser.add_argument('--max-epoch', default=60, type=int,
                        help='maximum epochs to run')
    parser.add_argument('--start-epoch', default=0, type=int,
                        help='manual epoch number (useful when restarting)')

    parser.add_argument('--train-batch-size', default=32, type=int,
                        help='training batch size')
    parser.add_argument('--test-batch-size', default=100, type=int,
                        help='test batch size')

    parser.add_argument('--always-fixbase', action='store_true',
                        help='always fix base network and only train specified layers')
    parser.add_argument('--fixbase-epoch', type=int, default=0,
                        help='how many epochs to fix base network (only train randomly initialized classifier)')
    parser.add_argument('--open-layers', type=str, nargs='+', default=['classifier'],
                        help='open specified layers for training while keeping others frozen')

    parser.add_argument('--staged-lr', action='store_true',
                        help='set different lr to different layers')
    parser.add_argument('--new-layers', type=str, nargs='+', default=['classifier'],
                        help='newly added layers with default lr')
    parser.add_argument('--base-lr-mult', type=float, default=0.1,
                        help='learning rate multiplier for base layers')

    # ************************************************************
    # Learning rate scheduler options
    # ************************************************************
    parser.add_argument('--lr-scheduler', type=str, default='multi_step',
                        help='learning rate scheduler (see lr_schedulers.py)')
    parser.add_argument('--stepsize', default=[20, 40], nargs='+', type=int,
                        help='stepsize to decay learning rate')
    parser.add_argument('--gamma', default=0.1, type=float,
                        help='learning rate decay')

    # ************************************************************
    # Cross entropy loss-specific setting
    # ************************************************************
    parser.add_argument('--label-smooth', action='store_true',
                        help='use label smoothing regularizer in cross entropy loss')

    # ************************************************************
    # Hard triplet loss-specific setting
    # ************************************************************
    parser.add_argument('--margin', type=float, default=0.3,
                        help='margin for triplet loss')
    parser.add_argument('--num-instances', type=int, default=4,
                        help='number of instances per identity')
    parser.add_argument('--lambda-xent', type=float, default=1,
                        help='weight to balance cross entropy loss')
    parser.add_argument('--lambda-htri', type=float, default=1,
                        help='weight to balance hard triplet loss')

    # ************************************************************
    # Architecture
    # ************************************************************
    parser.add_argument('-a', '--arch', type=str, default='resnet50')
    parser.add_argument('--no-pretrained', action='store_true',
                        help='do not load pretrained weights')

    # ************************************************************
    # Test settings
    # ************************************************************
    parser.add_argument('--load-weights', type=str, default='',
                        help='load pretrained weights but ignore layers that don\'t match in size')
    parser.add_argument('--evaluate', action='store_true',
                        help='evaluate only')
    parser.add_argument('--eval-freq', type=int, default=-1,
                        help='evaluation frequency (set to -1 to test only in the end)')
    parser.add_argument('--start-eval', type=int, default=0,
                        help='start to evaluate after a specific epoch')

    # ************************************************************
    # Miscs
    # ************************************************************
    parser.add_argument('--print-freq', type=int, default=10,
                        help='print frequency')
    parser.add_argument('--seed', type=int, default=1,
                        help='manual seed')
    parser.add_argument('--resume', type=str, default='', metavar='PATH',
                        help='resume from a checkpoint')
    parser.add_argument('--save-dir', type=str, default='log',
                        help='path to save log and model weights')
    parser.add_argument('--use-cpu', action='store_true',
                        help='use cpu')
    parser.add_argument('--gpu-devices', default='0', type=str,
                        help='gpu device ids for CUDA_VISIBLE_DEVICES')
    parser.add_argument('--use-avai-gpus', action='store_true',
                        help='use available gpus instead of specified devices (useful when using managed clusters)')
    parser.add_argument('--visualize-ranks', action='store_true',
                        help='visualize ranked results, only available in evaluation mode')

    return parser


def image_dataset_kwargs(parsed_args):
    """
    Build kwargs for ImageDataManager in data_manager.py from
    the parsed command-line arguments.
    """
    return {
        'source_names': parsed_args.source_names,
        'target_names': parsed_args.target_names,
        'root': parsed_args.root,
        'split_id': parsed_args.split_id,
        'height': parsed_args.height,
        'width': parsed_args.width,
        'combineall': parsed_args.combineall,
        'train_batch_size': parsed_args.train_batch_size,
        'test_batch_size': parsed_args.test_batch_size,
        'workers': parsed_args.workers,
        'train_sampler': parsed_args.train_sampler,
        'num_instances': parsed_args.num_instances,
        'cuhk03_labeled': parsed_args.cuhk03_labeled,
        'cuhk03_classic_split': parsed_args.cuhk03_classic_split,
        'market1501_500k': parsed_args.market1501_500k,
        'random_erase': parsed_args.random_erase,
        'color_jitter': parsed_args.color_jitter,
        'color_aug': parsed_args.color_aug,
    }


def video_dataset_kwargs(parsed_args):
    """
    Build kwargs for VideoDataManager in data_manager.py from
    the parsed command-line arguments.
    """
    return {
        'source_names': parsed_args.source_names,
        'target_names': parsed_args.target_names,
        'root': parsed_args.root,
        'split_id': parsed_args.split_id,
        'height': parsed_args.height,
        'width': parsed_args.width,
        'combineall': parsed_args.combineall,
        'train_batch_size': parsed_args.train_batch_size,
        'test_batch_size': parsed_args.test_batch_size,
        'workers': parsed_args.workers,
        'train_sampler': parsed_args.train_sampler,
        'num_instances': parsed_args.num_instances,
        'seq_len': parsed_args.seq_len,
        'sample_method': parsed_args.sample_method,
        'random_erase': parsed_args.random_erase,
        'color_jitter': parsed_args.color_jitter,
        'color_aug': parsed_args.color_aug,
    }


def optimizer_kwargs(parsed_args):
    """
    Build kwargs for optimizer in optimizers.py from
    the parsed command-line arguments.
    """
    return {
        'optim': parsed_args.optim,
        'lr': parsed_args.lr,
        'weight_decay': parsed_args.weight_decay,
        'momentum': parsed_args.momentum,
        'sgd_dampening': parsed_args.sgd_dampening,
        'sgd_nesterov': parsed_args.sgd_nesterov,
        'rmsprop_alpha': parsed_args.rmsprop_alpha,
        'adam_beta1': parsed_args.adam_beta1,
        'adam_beta2': parsed_args.adam_beta2,
        'staged_lr': parsed_args.staged_lr,
        'new_layers': parsed_args.new_layers,
        'base_lr_mult': parsed_args.base_lr_mult,
    }


def lr_scheduler_kwargs(parsed_args):
    """
    Build kwargs for lr_scheduler in lr_schedulers.py from
    the parsed command-line arguments.
    """
    return {
        'lr_scheduler': parsed_args.lr_scheduler,
        'stepsize': parsed_args.stepsize,
        'gamma': parsed_args.gamma,
    }
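

# How the training scripts below consume this module (a sketch taken
# directly from their main() functions, not an additional API):
#
#     parser = argument_parser()
#     args = parser.parse_args()
#     dm = ImageDataManager(use_gpu, **image_dataset_kwargs(args))  # or VideoDataManager(use_gpu, **video_dataset_kwargs(args))
#     optimizer = init_optimizer(model, **optimizer_kwargs(args))
#     scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs(args))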


# ----------------------------------------------------------------------
# Training script: image reid with cross-entropy loss (xent)
# ----------------------------------------------------------------------
from __future__ import print_function
from __future__ import division

import os
import sys
import time
import datetime
import os.path as osp
import numpy as np
import warnings

import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn

from args import argument_parser, image_dataset_kwargs, optimizer_kwargs, lr_scheduler_kwargs
from torchreid.data_manager import ImageDataManager
from torchreid import models
from torchreid.losses import CrossEntropyLoss, DeepSupervision
from torchreid.utils.iotools import check_isfile
from torchreid.utils.avgmeter import AverageMeter
from torchreid.utils.loggers import Logger, RankLogger
from torchreid.utils.torchtools import count_num_param, open_all_layers, open_specified_layers, accuracy, \
    load_pretrained_weights, save_checkpoint, resume_from_checkpoint
from torchreid.utils.reidtools import visualize_ranked_results
from torchreid.utils.generaltools import set_random_seed
from torchreid.eval_metrics import evaluate
from torchreid.optimizers import init_optimizer
from torchreid.lr_schedulers import init_lr_scheduler


# global variables
parser = argument_parser()
args = parser.parse_args()


def main():
    global args

    set_random_seed(args.seed)
    if not args.use_avai_gpus:
        os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_devices
    use_gpu = torch.cuda.is_available()
    if args.use_cpu:
        use_gpu = False
    log_name = 'test.log' if args.evaluate else 'train.log'
    sys.stdout = Logger(osp.join(args.save_dir, log_name))
    print('==========\nArgs:{}\n=========='.format(args))

    if use_gpu:
        print('Currently using GPU {}'.format(args.gpu_devices))
        cudnn.benchmark = True
    else:
        warnings.warn('Currently using CPU, however, GPU is highly recommended')

    print('Initializing image data manager')
    dm = ImageDataManager(use_gpu, **image_dataset_kwargs(args))
    trainloader, testloader_dict = dm.return_dataloaders()

    print('Initializing model: {}'.format(args.arch))
    model = models.init_model(name=args.arch, num_classes=dm.num_train_pids, loss={'xent'}, pretrained=not args.no_pretrained, use_gpu=use_gpu)
    print('Model size: {:.3f} M'.format(count_num_param(model)))

    if args.load_weights and check_isfile(args.load_weights):
        load_pretrained_weights(model, args.load_weights)

    model = nn.DataParallel(model).cuda() if use_gpu else model

    criterion = CrossEntropyLoss(num_classes=dm.num_train_pids, use_gpu=use_gpu, label_smooth=args.label_smooth)
    optimizer = init_optimizer(model, **optimizer_kwargs(args))
    scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs(args))

    if args.resume and check_isfile(args.resume):
        args.start_epoch = resume_from_checkpoint(args.resume, model, optimizer=optimizer)

    if args.evaluate:
        print('Evaluate only')

        for name in args.target_names:
            print('Evaluating {} ...'.format(name))
            queryloader = testloader_dict[name]['query']
            galleryloader = testloader_dict[name]['gallery']
            distmat = test(model, queryloader, galleryloader, use_gpu, return_distmat=True)

            if args.visualize_ranks:
                visualize_ranked_results(
                    distmat, dm.return_testdataset_by_name(name),
                    save_dir=osp.join(args.save_dir, 'ranked_results', name),
                    topk=20
                )
        return

    time_start = time.time()
    ranklogger = RankLogger(args.source_names, args.target_names)
    print('=> Start training')

    if args.fixbase_epoch > 0:
        print('Train {} for {} epochs while keeping other layers frozen'.format(args.open_layers, args.fixbase_epoch))
        initial_optim_state = optimizer.state_dict()

        for epoch in range(args.fixbase_epoch):
            train(epoch, model, criterion, optimizer, trainloader, use_gpu, fixbase=True)

        print('Done. All layers are open to train for {} epochs'.format(args.max_epoch))
        optimizer.load_state_dict(initial_optim_state)

    for epoch in range(args.start_epoch, args.max_epoch):
        train(epoch, model, criterion, optimizer, trainloader, use_gpu)

        scheduler.step()

        if (epoch + 1) > args.start_eval and args.eval_freq > 0 and (epoch + 1) % args.eval_freq == 0 or (epoch + 1) == args.max_epoch:
            print('=> Test')

            for name in args.target_names:
                print('Evaluating {} ...'.format(name))
                queryloader = testloader_dict[name]['query']
                galleryloader = testloader_dict[name]['gallery']
                rank1 = test(model, queryloader, galleryloader, use_gpu)
                ranklogger.write(name, epoch + 1, rank1)

            save_checkpoint({
                'state_dict': model.state_dict(),
                'rank1': rank1,
                'epoch': epoch + 1,
                'arch': args.arch,
                'optimizer': optimizer.state_dict(),
            }, args.save_dir)

    elapsed = round(time.time() - time_start)
    elapsed = str(datetime.timedelta(seconds=elapsed))
    print('Elapsed {}'.format(elapsed))
    ranklogger.show_summary()


def train(epoch, model, criterion, optimizer, trainloader, use_gpu, fixbase=False):
    losses = AverageMeter()
    accs = AverageMeter()
    batch_time = AverageMeter()
    data_time = AverageMeter()

    model.train()

    if fixbase or args.always_fixbase:
        open_specified_layers(model, args.open_layers)
    else:
        open_all_layers(model)

    end = time.time()
    for batch_idx, (imgs, pids, _, _) in enumerate(trainloader):
        data_time.update(time.time() - end)

        if use_gpu:
            imgs, pids = imgs.cuda(), pids.cuda()

        outputs = model(imgs)
        if isinstance(outputs, (tuple, list)):
            # models with deep supervision return one prediction per branch
            loss = DeepSupervision(criterion, outputs, pids)
        else:
            loss = criterion(outputs, pids)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        batch_time.update(time.time() - end)

        losses.update(loss.item(), pids.size(0))
        accs.update(accuracy(outputs, pids)[0])

        if (batch_idx + 1) % args.print_freq == 0:
            print('Epoch: [{0}][{1}/{2}]\t'
                  'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                  'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
                  'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
                  'Acc {acc.val:.2f} ({acc.avg:.2f})\t'.format(
                      epoch + 1, batch_idx + 1, len(trainloader),
                      batch_time=batch_time,
                      data_time=data_time,
                      loss=losses,
                      acc=accs
                  ))

        end = time.time()


def test(model, queryloader, galleryloader, use_gpu, ranks=[1, 5, 10, 20], return_distmat=False):
    batch_time = AverageMeter()

    model.eval()

    with torch.no_grad():
        qf, q_pids, q_camids = [], [], []
        for batch_idx, (imgs, pids, camids, _) in enumerate(queryloader):
            if use_gpu:
                imgs = imgs.cuda()

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            features = features.data.cpu()
            qf.append(features)
            q_pids.extend(pids)
            q_camids.extend(camids)
        qf = torch.cat(qf, 0)
        q_pids = np.asarray(q_pids)
        q_camids = np.asarray(q_camids)

        print('Extracted features for query set, obtained {}-by-{} matrix'.format(qf.size(0), qf.size(1)))

        gf, g_pids, g_camids = [], [], []
        end = time.time()
        for batch_idx, (imgs, pids, camids, _) in enumerate(galleryloader):
            if use_gpu:
                imgs = imgs.cuda()

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            features = features.data.cpu()
            gf.append(features)
            g_pids.extend(pids)
            g_camids.extend(camids)
        gf = torch.cat(gf, 0)
        g_pids = np.asarray(g_pids)
        g_camids = np.asarray(g_camids)

        print('Extracted features for gallery set, obtained {}-by-{} matrix'.format(gf.size(0), gf.size(1)))

    print('=> BatchTime(s)/BatchSize(img): {:.3f}/{}'.format(batch_time.avg, args.test_batch_size))

    # squared Euclidean distance matrix: ||q||^2 + ||g||^2 - 2 * q . g
    m, n = qf.size(0), gf.size(0)
    distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
              torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
    distmat.addmm_(1, -2, qf, gf.t())
    distmat = distmat.numpy()

    print('Computing CMC and mAP')
    cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids, use_metric_cuhk03=args.use_metric_cuhk03)

    print('Results ----------')
    print('mAP: {:.1%}'.format(mAP))
    print('CMC curve')
    for r in ranks:
        print('Rank-{:<3}: {:.1%}'.format(r, cmc[r-1]))
    print('------------------')

    if return_distmat:
        return distmat
    return cmc[0]


if __name__ == '__main__':
    main()


# ----------------------------------------------------------------------
# Training script: image reid with cross-entropy + hard triplet loss
# (xent + htri)
# ----------------------------------------------------------------------
from __future__ import print_function
from __future__ import division

import os
import sys
import time
import datetime
import os.path as osp
import numpy as np
import warnings

import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn

from args import argument_parser, image_dataset_kwargs, optimizer_kwargs, lr_scheduler_kwargs
from torchreid.data_manager import ImageDataManager
from torchreid import models
from torchreid.losses import CrossEntropyLoss, TripletLoss, DeepSupervision
from torchreid.utils.iotools import check_isfile
from torchreid.utils.avgmeter import AverageMeter
from torchreid.utils.loggers import Logger, RankLogger
from torchreid.utils.torchtools import count_num_param, open_all_layers, open_specified_layers, accuracy, \
    load_pretrained_weights, save_checkpoint, resume_from_checkpoint
from torchreid.utils.reidtools import visualize_ranked_results
from torchreid.utils.generaltools import set_random_seed
from torchreid.eval_metrics import evaluate
from torchreid.samplers import RandomIdentitySampler
from torchreid.optimizers import init_optimizer
from torchreid.lr_schedulers import init_lr_scheduler


# global variables
parser = argument_parser()
args = parser.parse_args()


def main():
    global args

    set_random_seed(args.seed)
    if not args.use_avai_gpus:
        os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_devices
    use_gpu = torch.cuda.is_available()
    if args.use_cpu:
        use_gpu = False
    log_name = 'test.log' if args.evaluate else 'train.log'
    sys.stdout = Logger(osp.join(args.save_dir, log_name))
    print('==========\nArgs:{}\n=========='.format(args))

    if use_gpu:
        print('Currently using GPU {}'.format(args.gpu_devices))
        cudnn.benchmark = True
    else:
        warnings.warn('Currently using CPU, however, GPU is highly recommended')

    print('Initializing image data manager')
    dm = ImageDataManager(use_gpu, **image_dataset_kwargs(args))
    trainloader, testloader_dict = dm.return_dataloaders()

    print('Initializing model: {}'.format(args.arch))
    model = models.init_model(name=args.arch, num_classes=dm.num_train_pids, loss={'xent', 'htri'}, pretrained=not args.no_pretrained, use_gpu=use_gpu)
    print('Model size: {:.3f} M'.format(count_num_param(model)))

    if args.load_weights and check_isfile(args.load_weights):
        load_pretrained_weights(model, args.load_weights)

    model = nn.DataParallel(model).cuda() if use_gpu else model

    criterion_xent = CrossEntropyLoss(num_classes=dm.num_train_pids, use_gpu=use_gpu, label_smooth=args.label_smooth)
    criterion_htri = TripletLoss(margin=args.margin)
    optimizer = init_optimizer(model, **optimizer_kwargs(args))
    scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs(args))

    if args.resume and check_isfile(args.resume):
        args.start_epoch = resume_from_checkpoint(args.resume, model, optimizer=optimizer)

    if args.evaluate:
        print('Evaluate only')

        for name in args.target_names:
            print('Evaluating {} ...'.format(name))
            queryloader = testloader_dict[name]['query']
            galleryloader = testloader_dict[name]['gallery']
            distmat = test(model, queryloader, galleryloader, use_gpu, return_distmat=True)

            if args.visualize_ranks:
                visualize_ranked_results(
                    distmat, dm.return_testdataset_by_name(name),
                    save_dir=osp.join(args.save_dir, 'ranked_results', name),
                    topk=20
                )
        return

    time_start = time.time()
    ranklogger = RankLogger(args.source_names, args.target_names)
    print('=> Start training')

    if args.fixbase_epoch > 0:
        print('Train {} for {} epochs while keeping other layers frozen'.format(args.open_layers, args.fixbase_epoch))
        initial_optim_state = optimizer.state_dict()

        for epoch in range(args.fixbase_epoch):
            train(epoch, model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu, fixbase=True)

        print('Done. All layers are open to train for {} epochs'.format(args.max_epoch))
        optimizer.load_state_dict(initial_optim_state)

    for epoch in range(args.start_epoch, args.max_epoch):
        train(epoch, model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu)

        scheduler.step()

        if (epoch + 1) > args.start_eval and args.eval_freq > 0 and (epoch + 1) % args.eval_freq == 0 or (epoch + 1) == args.max_epoch:
            print('=> Test')

            for name in args.target_names:
                print('Evaluating {} ...'.format(name))
                queryloader = testloader_dict[name]['query']
                galleryloader = testloader_dict[name]['gallery']
                rank1 = test(model, queryloader, galleryloader, use_gpu)
                ranklogger.write(name, epoch + 1, rank1)

            save_checkpoint({
                'state_dict': model.state_dict(),
                'rank1': rank1,
                'epoch': epoch + 1,
                'arch': args.arch,
                'optimizer': optimizer.state_dict(),
            }, args.save_dir)

    elapsed = round(time.time() - time_start)
    elapsed = str(datetime.timedelta(seconds=elapsed))
    print('Elapsed {}'.format(elapsed))
    ranklogger.show_summary()


def train(epoch, model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu, fixbase=False):
    xent_losses = AverageMeter()
    htri_losses = AverageMeter()
    accs = AverageMeter()
    batch_time = AverageMeter()
    data_time = AverageMeter()

    model.train()

    if fixbase or args.always_fixbase:
        open_specified_layers(model, args.open_layers)
    else:
        open_all_layers(model)

    end = time.time()
    for batch_idx, (imgs, pids, _, _) in enumerate(trainloader):
        data_time.update(time.time() - end)

        if use_gpu:
            imgs, pids = imgs.cuda(), pids.cuda()

        # models trained with 'xent' + 'htri' return (logits, embeddings)
        outputs, features = model(imgs)
        if isinstance(outputs, (tuple, list)):
            xent_loss = DeepSupervision(criterion_xent, outputs, pids)
        else:
            xent_loss = criterion_xent(outputs, pids)

        if isinstance(features, (tuple, list)):
            htri_loss = DeepSupervision(criterion_htri, features, pids)
        else:
            htri_loss = criterion_htri(features, pids)

        # weighted combination of the two objectives
        loss = args.lambda_xent * xent_loss + args.lambda_htri * htri_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        batch_time.update(time.time() - end)

        xent_losses.update(xent_loss.item(), pids.size(0))
        htri_losses.update(htri_loss.item(), pids.size(0))
        accs.update(accuracy(outputs, pids)[0])

        if (batch_idx + 1) % args.print_freq == 0:
            print('Epoch: [{0}][{1}/{2}]\t'
                  'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                  'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
                  'Xent {xent.val:.4f} ({xent.avg:.4f})\t'
                  'Htri {htri.val:.4f} ({htri.avg:.4f})\t'
                  'Acc {acc.val:.2f} ({acc.avg:.2f})\t'.format(
                      epoch + 1, batch_idx + 1, len(trainloader),
                      batch_time=batch_time,
                      data_time=data_time,
                      xent=xent_losses,
                      htri=htri_losses,
                      acc=accs
                  ))

        end = time.time()


def test(model, queryloader, galleryloader, use_gpu, ranks=[1, 5, 10, 20], return_distmat=False):
    batch_time = AverageMeter()

    model.eval()

    with torch.no_grad():
        qf, q_pids, q_camids = [], [], []
        for batch_idx, (imgs, pids, camids, _) in enumerate(queryloader):
            if use_gpu:
                imgs = imgs.cuda()

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            features = features.data.cpu()
            qf.append(features)
            q_pids.extend(pids)
            q_camids.extend(camids)
        qf = torch.cat(qf, 0)
        q_pids = np.asarray(q_pids)
        q_camids = np.asarray(q_camids)

        print('Extracted features for query set, obtained {}-by-{} matrix'.format(qf.size(0), qf.size(1)))

        gf, g_pids, g_camids = [], [], []
        for batch_idx, (imgs, pids, camids, _) in enumerate(galleryloader):
            if use_gpu:
                imgs = imgs.cuda()

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            features = features.data.cpu()
            gf.append(features)
            g_pids.extend(pids)
            g_camids.extend(camids)
        gf = torch.cat(gf, 0)
        g_pids = np.asarray(g_pids)
        g_camids = np.asarray(g_camids)

        print('Extracted features for gallery set, obtained {}-by-{} matrix'.format(gf.size(0), gf.size(1)))

    print('=> BatchTime(s)/BatchSize(img): {:.3f}/{}'.format(batch_time.avg, args.test_batch_size))

    m, n = qf.size(0), gf.size(0)
    distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
              torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
    distmat.addmm_(1, -2, qf, gf.t())
    distmat = distmat.numpy()

    print('Computing CMC and mAP')
    cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids, use_metric_cuhk03=args.use_metric_cuhk03)

    print('Results ----------')
    print('mAP: {:.1%}'.format(mAP))
    print('CMC curve')
    for r in ranks:
        print('Rank-{:<3}: {:.1%}'.format(r, cmc[r-1]))
    print('------------------')

    if return_distmat:
        return distmat
    return cmc[0]


if __name__ == '__main__':
    main()


# ----------------------------------------------------------------------
# Training script: video reid with cross-entropy loss (xent)
# ----------------------------------------------------------------------
from __future__ import print_function
from __future__ import division

import os
import sys
import time
import datetime
import os.path as osp
import numpy as np
import warnings

import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.utils.data import DataLoader

from args import argument_parser, video_dataset_kwargs, optimizer_kwargs, lr_scheduler_kwargs
from torchreid.data_manager import VideoDataManager
from torchreid import models
from torchreid.losses import CrossEntropyLoss, DeepSupervision
from torchreid.utils.iotools import check_isfile
from torchreid.utils.avgmeter import AverageMeter
from torchreid.utils.loggers import Logger, RankLogger
from torchreid.utils.torchtools import count_num_param, open_all_layers, open_specified_layers, accuracy, \
    load_pretrained_weights, save_checkpoint, resume_from_checkpoint
from torchreid.utils.reidtools import visualize_ranked_results
from torchreid.utils.generaltools import set_random_seed
from torchreid.eval_metrics import evaluate
from torchreid.optimizers import init_optimizer
from torchreid.lr_schedulers import init_lr_scheduler


# global variables
parser = argument_parser()
args = parser.parse_args()


def main():
    global args

    set_random_seed(args.seed)
    if not args.use_avai_gpus:
        os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_devices
    use_gpu = torch.cuda.is_available()
    if args.use_cpu:
        use_gpu = False
    log_name = 'test.log' if args.evaluate else 'train.log'
    sys.stdout = Logger(osp.join(args.save_dir, log_name))
    print('==========\nArgs:{}\n=========='.format(args))

    if use_gpu:
        print('Currently using GPU {}'.format(args.gpu_devices))
        cudnn.benchmark = True
    else:
        warnings.warn('Currently using CPU, however, GPU is highly recommended')

    print('Initializing video data manager')
    dm = VideoDataManager(use_gpu, **video_dataset_kwargs(args))
    trainloader, testloader_dict = dm.return_dataloaders()

    print('Initializing model: {}'.format(args.arch))
    model = models.init_model(name=args.arch, num_classes=dm.num_train_pids, loss={'xent'}, pretrained=not args.no_pretrained, use_gpu=use_gpu)
    print('Model size: {:.3f} M'.format(count_num_param(model)))

    if args.load_weights and check_isfile(args.load_weights):
        load_pretrained_weights(model, args.load_weights)

    model = nn.DataParallel(model).cuda() if use_gpu else model

    criterion = CrossEntropyLoss(num_classes=dm.num_train_pids, use_gpu=use_gpu, label_smooth=args.label_smooth)
    optimizer = init_optimizer(model, **optimizer_kwargs(args))
    scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs(args))

    if args.resume and check_isfile(args.resume):
        args.start_epoch = resume_from_checkpoint(args.resume, model, optimizer=optimizer)

    if args.evaluate:
        print('Evaluate only')

        for name in args.target_names:
            print('Evaluating {} ...'.format(name))
            queryloader = testloader_dict[name]['query']
            galleryloader = testloader_dict[name]['gallery']
            distmat = test(model, queryloader, galleryloader, args.pool_tracklet_features, use_gpu, return_distmat=True)

            if args.visualize_ranks:
                visualize_ranked_results(
                    distmat, dm.return_testdataset_by_name(name),
                    save_dir=osp.join(args.save_dir, 'ranked_results', name),
                    topk=20
                )
        return

    time_start = time.time()
    ranklogger = RankLogger(args.source_names, args.target_names)
    print('=> Start training')

    if args.fixbase_epoch > 0:
        print('Train {} for {} epochs while keeping other layers frozen'.format(args.open_layers, args.fixbase_epoch))
        initial_optim_state = optimizer.state_dict()

        for epoch in range(args.fixbase_epoch):
            train(epoch, model, criterion, optimizer, trainloader, use_gpu, fixbase=True)

        print('Done. All layers are open to train for {} epochs'.format(args.max_epoch))
        optimizer.load_state_dict(initial_optim_state)

    for epoch in range(args.start_epoch, args.max_epoch):
        train(epoch, model, criterion, optimizer, trainloader, use_gpu)

        scheduler.step()

        if (epoch + 1) > args.start_eval and args.eval_freq > 0 and (epoch + 1) % args.eval_freq == 0 or (epoch + 1) == args.max_epoch:
            print('=> Test')

            for name in args.target_names:
                print('Evaluating {} ...'.format(name))
                queryloader = testloader_dict[name]['query']
                galleryloader = testloader_dict[name]['gallery']
                rank1 = test(model, queryloader, galleryloader, args.pool_tracklet_features, use_gpu)
                ranklogger.write(name, epoch + 1, rank1)

            save_checkpoint({
                'state_dict': model.state_dict(),
                'rank1': rank1,
                'epoch': epoch + 1,
                'arch': args.arch,
                'optimizer': optimizer.state_dict(),
            }, args.save_dir)

    elapsed = round(time.time() - time_start)
    elapsed = str(datetime.timedelta(seconds=elapsed))
    print('Elapsed {}'.format(elapsed))
    ranklogger.show_summary()


def train(epoch, model, criterion, optimizer, trainloader, use_gpu, fixbase=False):
    losses = AverageMeter()
    accs = AverageMeter()
    batch_time = AverageMeter()
    data_time = AverageMeter()

    model.train()

    if fixbase or args.always_fixbase:
        open_specified_layers(model, args.open_layers)
    else:
        open_all_layers(model)

    end = time.time()
    for batch_idx, (imgs, pids, _, _) in enumerate(trainloader):
        data_time.update(time.time() - end)

        if use_gpu:
            imgs, pids = imgs.cuda(), pids.cuda()

        outputs = model(imgs)
        if isinstance(outputs, (tuple, list)):
            loss = DeepSupervision(criterion, outputs, pids)
        else:
            loss = criterion(outputs, pids)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        batch_time.update(time.time() - end)

        losses.update(loss.item(), pids.size(0))
        accs.update(accuracy(outputs, pids)[0])

        if (batch_idx + 1) % args.print_freq == 0:
            print('Epoch: [{0}][{1}/{2}]\t'
                  'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                  'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
                  'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
                  'Acc {acc.val:.2f} ({acc.avg:.2f})\t'.format(
                      epoch + 1, batch_idx + 1, len(trainloader),
                      batch_time=batch_time,
                      data_time=data_time,
                      loss=losses,
                      acc=accs
                  ))

        end = time.time()


def test(model, queryloader, galleryloader, pool, use_gpu, ranks=[1, 5, 10, 20], return_distmat=False):
    batch_time = AverageMeter()

    model.eval()

    with torch.no_grad():
        qf, q_pids, q_camids = [], [], []
        for batch_idx, (imgs, pids, camids) in enumerate(queryloader):
            if use_gpu:
                imgs = imgs.cuda()
            # a video batch has shape (batch, seq_len, channel, height, width);
            # flatten the tracklets so every frame passes through the CNN
            b, s, c, h, w = imgs.size()
            imgs = imgs.view(b*s, c, h, w)

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            # regroup frame features per tracklet and pool over the sequence dim
            features = features.view(b, s, -1)
            if pool == 'avg':
                features = torch.mean(features, 1)
            else:
                features, _ = torch.max(features, 1)
            features = features.data.cpu()
            qf.append(features)
            q_pids.extend(pids)
            q_camids.extend(camids)
        qf = torch.cat(qf, 0)
        q_pids = np.asarray(q_pids)
        q_camids = np.asarray(q_camids)

        print('Extracted features for query set, obtained {}-by-{} matrix'.format(qf.size(0), qf.size(1)))

        gf, g_pids, g_camids = [], [], []
        for batch_idx, (imgs, pids, camids) in enumerate(galleryloader):
            if use_gpu:
                imgs = imgs.cuda()
            b, s, c, h, w = imgs.size()
            imgs = imgs.view(b*s, c, h, w)

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            features = features.view(b, s, -1)
            if pool == 'avg':
                features = torch.mean(features, 1)
            else:
                features, _ = torch.max(features, 1)
            features = features.data.cpu()
            gf.append(features)
            g_pids.extend(pids)
            g_camids.extend(camids)
        gf = torch.cat(gf, 0)
        g_pids = np.asarray(g_pids)
        g_camids = np.asarray(g_camids)

        print('Extracted features for gallery set, obtained {}-by-{} matrix'.format(gf.size(0), gf.size(1)))

    print('=> BatchTime(s)/BatchSize(img): {:.3f}/{}'.format(batch_time.avg, args.test_batch_size * args.seq_len))

    m, n = qf.size(0), gf.size(0)
    distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
              torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
    distmat.addmm_(1, -2, qf, gf.t())
    distmat = distmat.numpy()

    print('Computing CMC and mAP')
    cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)

    print('Results ----------')
    print('mAP: {:.1%}'.format(mAP))
    print('CMC curve')
    for r in ranks:
        print('Rank-{:<3}: {:.1%}'.format(r, cmc[r-1]))
    print('------------------')

    if return_distmat:
        return distmat
    return cmc[0]


if __name__ == '__main__':
    main()


# ----------------------------------------------------------------------
# Training script: video reid with cross-entropy + hard triplet loss
# (xent + htri)
# ----------------------------------------------------------------------
from __future__ import print_function
|
||||
from __future__ import division
|
||||
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import datetime
|
||||
import os.path as osp
|
||||
import numpy as np
|
||||
import warnings
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.backends.cudnn as cudnn
|
||||
from torch.utils.data import DataLoader
|
||||
|
||||
from args import argument_parser, video_dataset_kwargs, optimizer_kwargs, lr_scheduler_kwargs
|
||||
from torchreid.data_manager import VideoDataManager
|
||||
from torchreid import models
|
||||
from torchreid.losses import CrossEntropyLoss, TripletLoss, DeepSupervision
|
||||
from torchreid.utils.iotools import check_isfile
|
||||
from torchreid.utils.avgmeter import AverageMeter
|
||||
from torchreid.utils.loggers import Logger, RankLogger
|
||||
from torchreid.utils.torchtools import count_num_param, open_all_layers, open_specified_layers, accuracy, \
|
||||
load_pretrained_weights, save_checkpoint, resume_from_checkpoint
|
||||
from torchreid.utils.reidtools import visualize_ranked_results
|
||||
from torchreid.utils.generaltools import set_random_seed
|
||||
from torchreid.eval_metrics import evaluate
|
||||
from torchreid.samplers import RandomIdentitySampler
|
||||
from torchreid.optimizers import init_optimizer
|
||||
from torchreid.lr_schedulers import init_lr_scheduler
|
||||
|
||||
|
||||
# global variables
|
||||
parser = argument_parser()
|
||||
args = parser.parse_args()
|
||||
|
||||
|
||||
def main():
    global args

    set_random_seed(args.seed)
    if not args.use_avai_gpus: os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_devices
    use_gpu = torch.cuda.is_available()
    if args.use_cpu: use_gpu = False
    log_name = 'test.log' if args.evaluate else 'train.log'
    sys.stdout = Logger(osp.join(args.save_dir, log_name))
    print('==========\nArgs:{}\n=========='.format(args))

    if use_gpu:
        print('Currently using GPU {}'.format(args.gpu_devices))
        cudnn.benchmark = True
    else:
        warnings.warn('Currently using CPU, however, GPU is highly recommended')

    print('Initializing video data manager')
    dm = VideoDataManager(use_gpu, **video_dataset_kwargs(args))
    trainloader, testloader_dict = dm.return_dataloaders()

    print('Initializing model: {}'.format(args.arch))
    model = models.init_model(name=args.arch, num_classes=dm.num_train_pids, loss={'xent', 'htri'}, pretrained=not args.no_pretrained, use_gpu=use_gpu)
    print('Model size: {:.3f} M'.format(count_num_param(model)))

    if args.load_weights and check_isfile(args.load_weights):
        load_pretrained_weights(model, args.load_weights)

    model = nn.DataParallel(model).cuda() if use_gpu else model

    criterion_xent = CrossEntropyLoss(num_classes=dm.num_train_pids, use_gpu=use_gpu, label_smooth=args.label_smooth)
    criterion_htri = TripletLoss(margin=args.margin)
    optimizer = init_optimizer(model, **optimizer_kwargs(args))
    scheduler = init_lr_scheduler(optimizer, **lr_scheduler_kwargs(args))

    if args.resume and check_isfile(args.resume):
        args.start_epoch = resume_from_checkpoint(args.resume, model, optimizer=optimizer)

    if args.evaluate:
        print('Evaluate only')

        for name in args.target_names:
            print('Evaluating {} ...'.format(name))
            queryloader = testloader_dict[name]['query']
            galleryloader = testloader_dict[name]['gallery']
            distmat = test(model, queryloader, galleryloader, args.pool_tracklet_features, use_gpu, return_distmat=True)

            if args.visualize_ranks:
                visualize_ranked_results(
                    distmat, dm.return_testdataset_by_name(name),
                    save_dir=osp.join(args.save_dir, 'ranked_results', name),
                    topk=20
                )
        return

    time_start = time.time()
    ranklogger = RankLogger(args.source_names, args.target_names)
    print('=> Start training')

    if args.fixbase_epoch > 0:
        print('Train {} for {} epochs while keeping other layers frozen'.format(args.open_layers, args.fixbase_epoch))
        initial_optim_state = optimizer.state_dict()

        for epoch in range(args.fixbase_epoch):
            train(epoch, model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu, fixbase=True)

        print('Done. All layers are open to train for {} epochs'.format(args.max_epoch))
        optimizer.load_state_dict(initial_optim_state)

    for epoch in range(args.start_epoch, args.max_epoch):
        train(epoch, model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu)

        scheduler.step()

        if (epoch + 1) > args.start_eval and args.eval_freq > 0 and (epoch + 1) % args.eval_freq == 0 or (epoch + 1) == args.max_epoch:
            print('=> Test')

            for name in args.target_names:
                print('Evaluating {} ...'.format(name))
                queryloader = testloader_dict[name]['query']
                galleryloader = testloader_dict[name]['gallery']
                rank1 = test(model, queryloader, galleryloader, args.pool_tracklet_features, use_gpu)
                ranklogger.write(name, epoch + 1, rank1)

            save_checkpoint({
                'state_dict': model.state_dict(),
                'rank1': rank1,
                'epoch': epoch + 1,
                'arch': args.arch,
                'optimizer': optimizer.state_dict(),
            }, args.save_dir)

    elapsed = round(time.time() - time_start)
    elapsed = str(datetime.timedelta(seconds=elapsed))
    print('Elapsed {}'.format(elapsed))
    ranklogger.show_summary()


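# train() runs one epoch: both losses are computed per batch and combined as
# args.lambda_xent * xent_loss + args.lambda_htri * htri_loss before backprop;
# during warm-up (fixbase) epochs only args.open_layers are unfrozen.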
def train(epoch, model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu, fixbase=False):
    xent_losses = AverageMeter()
    htri_losses = AverageMeter()
    accs = AverageMeter()
    batch_time = AverageMeter()
    data_time = AverageMeter()

    model.train()

    if fixbase or args.always_fixbase:
        open_specified_layers(model, args.open_layers)
    else:
        open_all_layers(model)

    end = time.time()
    for batch_idx, (imgs, pids, _, _) in enumerate(trainloader):
        data_time.update(time.time() - end)

        if use_gpu:
            imgs, pids = imgs.cuda(), pids.cuda()

        outputs, features = model(imgs)
        if isinstance(outputs, (tuple, list)):
            xent_loss = DeepSupervision(criterion_xent, outputs, pids)
        else:
            xent_loss = criterion_xent(outputs, pids)

        if isinstance(features, (tuple, list)):
            htri_loss = DeepSupervision(criterion_htri, features, pids)
        else:
            htri_loss = criterion_htri(features, pids)

        loss = args.lambda_xent * xent_loss + args.lambda_htri * htri_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        batch_time.update(time.time() - end)

        xent_losses.update(xent_loss.item(), pids.size(0))
        htri_losses.update(htri_loss.item(), pids.size(0))
        accs.update(accuracy(outputs, pids)[0])

        if (batch_idx + 1) % args.print_freq == 0:
            print('Epoch: [{0}][{1}/{2}]\t'
                  'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                  'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
                  'Xent {xent.val:.4f} ({xent.avg:.4f})\t'
                  'Htri {htri.val:.4f} ({htri.avg:.4f})\t'
                  'Acc {acc.val:.2f} ({acc.avg:.2f})\t'.format(
                      epoch + 1, batch_idx + 1, len(trainloader),
                      batch_time=batch_time,
                      data_time=data_time,
                      xent=xent_losses,
                      htri=htri_losses,
                      acc=accs
                  ))

        end = time.time()


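# test() extracts one feature per tracklet (mean- or max-pooled over the clip
# dimension, depending on `pool`), builds a query-gallery Euclidean distance
# matrix, and reports mAP plus CMC at the requested ranks.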
def test(model, queryloader, galleryloader, pool, use_gpu, ranks=[1, 5, 10, 20], return_distmat=False):
    batch_time = AverageMeter()

    model.eval()

    with torch.no_grad():
        qf, q_pids, q_camids = [], [], []
        for batch_idx, (imgs, pids, camids) in enumerate(queryloader):
            if use_gpu: imgs = imgs.cuda()
            b, s, c, h, w = imgs.size()
            imgs = imgs.view(b*s, c, h, w)

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            features = features.view(b, s, -1)
            if pool == 'avg':
                features = torch.mean(features, 1)
            else:
                features, _ = torch.max(features, 1)
            features = features.data.cpu()
            qf.append(features)
            q_pids.extend(pids)
            q_camids.extend(camids)
        qf = torch.cat(qf, 0)
        q_pids = np.asarray(q_pids)
        q_camids = np.asarray(q_camids)

        print('Extracted features for query set, obtained {}-by-{} matrix'.format(qf.size(0), qf.size(1)))

        gf, g_pids, g_camids = [], [], []
        for batch_idx, (imgs, pids, camids) in enumerate(galleryloader):
            if use_gpu: imgs = imgs.cuda()
            b, s, c, h, w = imgs.size()
            imgs = imgs.view(b*s, c, h, w)

            end = time.time()
            features = model(imgs)
            batch_time.update(time.time() - end)

            features = features.view(b, s, -1)
            if pool == 'avg':
                features = torch.mean(features, 1)
            else:
                features, _ = torch.max(features, 1)
            features = features.data.cpu()
            gf.append(features)
            g_pids.extend(pids)
            g_camids.extend(camids)
        gf = torch.cat(gf, 0)
        g_pids = np.asarray(g_pids)
        g_camids = np.asarray(g_camids)

        print('Extracted features for gallery set, obtained {}-by-{} matrix'.format(gf.size(0), gf.size(1)))

    print('=> BatchTime(s)/BatchSize(img): {:.3f}/{}'.format(batch_time.avg, args.test_batch_size * args.seq_len))

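    # Squared Euclidean distances via the expansion ||q - g||^2 = ||q||^2 + ||g||^2 - 2 q.g:
    # the two pow/sum terms build the squared-norm grids, and addmm_ subtracts
    # twice the inner products in a single fused matrix multiply.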
    m, n = qf.size(0), gf.size(0)
    distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \
              torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
    distmat.addmm_(1, -2, qf, gf.t())
    distmat = distmat.numpy()

    print('Computing CMC and mAP')
    cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)

    print('Results ----------')
    print('mAP: {:.1%}'.format(mAP))
    print('CMC curve')
    for r in ranks:
        print('Rank-{:<3}: {:.1%}'.format(r, cmc[r-1]))
    print('------------------')

    if return_distmat:
        return distmat
    return cmc[0]


if __name__ == '__main__':
    main()
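# Example invocation (illustrative only: the script name and flag spellings are
# assumptions based on the parsed args above; the actual definitions live in
# args.py, which this diff does not show):
#   python train_vidreid_xent_htri.py -s mars -t mars --arch resnet50 \
#       --save-dir log/resnet50-mars-xent-htri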