In MMSelfSup, we provide many benchmarks, so the models can be evaluated on different downstream tasks. Here are comprehensive tutorials and examples that explain how to run all benchmarks with MMSelfSup.
- [Tutorial 6: Run Benchmarks](#tutorial-6-run-benchmarks)
- `CHECKPOINT`: the checkpoint file of a self-supervised method, named `epoch_*.pth`.
- `MODEL_FILE`: the output backbone weights file. If not otherwise specified, the `PRETRAIN` below uses this extracted model file.
## Classification
For classification, we provide scripts in the folder `tools/benchmarks/classification/`, which contains 4 `.sh` files and 1 folder for the VOC SVM related classification task.
### VOC SVM / Low-shot SVM
To run this benchmark, you should first prepare your VOC datasets; for the details of data preparation, please refer to [data_prepare.md](../data_prepare.md).
To evaluate the pretrained models, you can run the command below.
**To test with a ckpt, the code uses the `epoch_*.pth` file directly, so there is no need to extract weights.**
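For example, the distributed test scripts under `tools/benchmarks/classification/svm_voc07/` can be launched as follows (the exact script names and argument order below are assumptions based on the MMSelfSup repository layout; please verify them against your checkout):

```shell
# evaluate an extracted backbone weights file
bash tools/benchmarks/classification/svm_voc07/dist_test_svm_pretrain.sh ${SELFSUP_CONFIG} ${GPUS} ${PRETRAIN} ${FEATURE_LIST}

# evaluate a raw checkpoint (epoch_*.pth) directly, without extracting weights
bash tools/benchmarks/classification/svm_voc07/dist_test_svm_epoch.sh ${SELFSUP_CONFIG} ${EPOCH} ${FEATURE_LIST}
```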
Remarks:
- `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment.
- `${FEATURE_LIST}` is a string specifying which features from layer1 to layer5 to evaluate; e.g., if you want to evaluate layer5 only, `FEATURE_LIST` is "feat5"; if you want to evaluate all features, `FEATURE_LIST` is "feat1 feat2 feat3 feat4 feat5" (separated by spaces). If left empty, the default `FEATURE_LIST` is "feat5".
- `PRETRAIN`: the pretrained model file.
- If you want to change the number of GPUs, you can add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command.
- `EPOCH` is the epoch number of the ckpt that you want to test.
### Linear Evaluation
The linear evaluation is one of the most general benchmarks. We integrate several papers' config settings and also include multi-head linear evaluation. We implement the classification model in our own codebase to support the multi-head function; thus, to run linear evaluation, we still use a `.sh` script to launch training. The supported datasets are **ImageNet**, **Places205** and **iNaturalist18**.
- The default GPU number is 8. When changing `GPUS`, please also change `imgs_per_gpu` in the config file accordingly to ensure that the total batch size is 256.
- `CONFIG`: Use config files under `configs/benchmarks/classification/`, excluding `svm_voc07.py`, `tsne_imagenet.py` and the `imagenet_*percent` folders.
- `PRETRAIN`: the pretrained model file.
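A typical launch looks like the following (the script name `dist_train_linear.sh` is taken from the MMSelfSup repository; confirm it against your version before use):

```shell
# linear evaluation with the default 8 GPUs
bash tools/benchmarks/classification/dist_train_linear.sh ${CONFIG} ${PRETRAIN}
```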
### ImageNet Semi-supervised Classification
To run ImageNet semi-supervised classification, we still use a `.sh` script to launch training.
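The launch command mirrors the linear evaluation one (the script name `dist_train_semi.sh` is an assumption based on the repository's naming convention; here `CONFIG` would come from the `imagenet_*percent` folders):

```shell
# semi-supervised fine-tuning on the ImageNet 1%/10% subsets
bash tools/benchmarks/classification/dist_train_semi.sh ${CONFIG} ${PRETRAIN}
```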
## Detection

Here, we prefer to use MMDetection to do the detection task. First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab.
Besides, please refer to MMDet for [installation](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/get_started.md) and [data preparation](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/1_exist_data_model.md).
- `CONFIG`: Use config files under `configs/benchmarks/mmdetection/` or write your own config files.
- `PRETRAIN`: the pretrained model file.
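With MIM installed, training can be launched through a wrapper script. A minimal sketch, assuming the `mim_dist_train_c4.sh` script that ships with the MMSelfSup repository (verify the name, since an FPN variant may also exist):

```shell
# launch MMDetection training through MIM with a self-supervised pretrained backbone
bash tools/benchmarks/mmdetection/mim_dist_train_c4.sh ${CONFIG} ${PRETRAIN} ${GPUS}
```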
Alternatively, if you want to do the detection task with [detectron2](https://github.com/facebookresearch/detectron2), we also provide some config files.
Please refer to [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) for installation and follow the [directory structure](https://github.com/facebookresearch/detectron2/tree/main/datasets) to prepare your datasets required by detectron2.
```
conda activate detectron2  # use the detectron2 environment here; otherwise use the open-mmlab environment
cd benchmarks/detection
python convert-pretrain-to-detectron2.py ${WEIGHT_FILE} ${OUTPUT_FILE}  # must use .pkl as the output extension
```
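After converting the weights, training is started from the same directory. The `run.sh` entry point and its argument order below are assumptions based on common MoCo-style detectron2 benchmark layouts, not confirmed from this repository:

```shell
# train a detectron2 model with the converted .pkl weights
# DET_CFG: a detectron2 config under configs/; OUTPUT_FILE: the converted .pkl from the previous step
bash run.sh ${DET_CFG} ${OUTPUT_FILE}
```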
## Segmentation

For the semantic segmentation task, we use MMSegmentation. First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab.
Besides, please refer to MMSeg for [installation](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/get_started.md) and [data preparation](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/dataset_prepare.md#prepare-datasets).
After installation, you can run MMSeg with a simple command.
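A sketch of that command, assuming the `mim_dist_train.sh` wrapper that the MMSelfSup repository provides for MMSegmentation (check the script name in your checkout):

```shell
# launch MMSegmentation training through MIM with a self-supervised pretrained backbone
bash tools/benchmarks/mmsegmentation/mim_dist_train.sh ${CONFIG} ${PRETRAIN} ${GPUS}
```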