# Evaluate the fine-tuned model on ImageNet variants

It is common practice to evaluate models fine-tuned on ImageNet-(1K, 21K) on the ImageNet-1K validation set. This set shares a similar data distribution with the training set, but in the real world, inference data is more likely to follow a different distribution. To fully evaluate a model's performance on out-of-distribution data, the research community has introduced ImageNet-variant datasets, whose data distributions differ from that of ImageNet-(1K, 21K). MMClassification supports evaluating the fine-tuned model on ImageNet-Adversarial (A), ImageNet-Rendition (R), ImageNet-Corruption (C), and ImageNet-Sketch (S). You can follow the steps below to have a try:

## Prepare the datasets

You can download these datasets from OpenDataLab and reorganize them under the `data` folder in the following format:

```
imagenet-a
├── meta
│   └── val.txt
└── val
imagenet-r
├── meta
│   └── val.txt
└── val
imagenet-s
├── meta
│   └── val.txt
└── val
imagenet-c
├── meta
│   └── val.txt
└── val
```

`val.txt` is the annotation file, which should follow the same style as that of ImageNet-1K: one image path and one class index per line. You can refer to prepare_dataset to generate the annotation file, or use the `generate_imagenet_variant_annotation.py` script provided in this project.
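If you need to build the annotation file yourself, the sketch below shows the expected `val.txt` style. The paths and the WordNet-ID-to-class-index mapping are illustrative assumptions, not the project's actual script:

```python
# A minimal sketch (not the provided script): build an ImageNet-1K style
# val.txt where each line is "<relative image path> <class index>".
# The data_root and the wnid -> index mapping below are illustrative.
import os

data_root = 'data/imagenet-x'  # e.g. data/imagenet-a
val_dir = os.path.join(data_root, 'val')

# Assumed mapping from WordNet IDs (folder names) to ImageNet-1K class indices.
wnid_to_index = {'n01440764': 0, 'n01443537': 1}  # fill in the full mapping

lines = []
for wnid in sorted(os.listdir(val_dir)):
    if wnid not in wnid_to_index:
        continue
    for fname in sorted(os.listdir(os.path.join(val_dir, wnid))):
        lines.append(f'{wnid}/{fname} {wnid_to_index[wnid]}')

os.makedirs(os.path.join(data_root, 'meta'), exist_ok=True)
with open(os.path.join(data_root, 'meta', 'val.txt'), 'w') as f:
    f.write('\n'.join(lines) + '\n')
```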

## Configure the dataset and test evaluator

Once the dataset is ready, you need to configure the `dataset` and `test_evaluator`. You have two options to override the default settings:

### 1. Change the configuration file directly

Only a few modifications to the config file are needed: change the `data_root` of the test dataloader and pass the annotation file to the `test_evaluator`.

```python
# You should replace imagenet-x below with imagenet-c, imagenet-r, imagenet-a
# or imagenet-s
test_dataloader = dict(dataset=dict(data_root='data/imagenet-x'))
test_evaluator = dict(ann_file='data/imagenet-x/meta/val.txt')
```
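For instance, a minimal standalone config could simply inherit from the config of your fine-tuned model and override these two fields; the `_base_` path below is a placeholder, not an actual file in this repository:

```python
# Sketch of a standalone evaluation config. Replace the _base_ path with the
# config used to fine-tune your model, and imagenet-a with the variant you use.
_base_ = ['path/to/your_finetuned_model_config.py']  # placeholder path

test_dataloader = dict(dataset=dict(data_root='data/imagenet-a'))
test_evaluator = dict(ann_file='data/imagenet-a/meta/val.txt')
```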

### 2. Overwrite the default settings from command line

For example, you can overwrite the default settings by passing `--cfg-options`:

```bash
--cfg-options test_dataloader.dataset.data_root='data/imagenet-x' \
              test_evaluator.ann_file='data/imagenet-x/meta/val.txt'
```

## Start test

This is the standard test step; you can follow this guide to evaluate your fine-tuned model on the out-of-distribution datasets.
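For example, a single-GPU evaluation with the command-line overrides from the previous step could look like the following, where the config and checkpoint paths are placeholders:

```bash
python tools/test.py path/to/your_config.py path/to/your_checkpoint.pth \
    --cfg-options test_dataloader.dataset.data_root='data/imagenet-a' \
                  test_evaluator.ann_file='data/imagenet-a/meta/val.txt'
```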

To make it easier, we also provide off-the-shelf config files under the `config` folder, including an example for ImageNet-C, and you can have a try.