
# Prepare Datasets

MMSelfSup supports multiple datasets. Please follow the corresponding guidelines for data preparation. It is recommended to symlink your dataset root to `$MMSELFSUP/data`. If your folder structure is different, you may need to change the corresponding paths in config files.

```
mmselfsup
├── mmselfsup
├── tools
├── configs
├── docs
├── data
│   ├── imagenet
│   │   ├── meta
│   │   ├── train
│   │   ├── val
│   ├── places205
│   │   ├── meta
│   │   ├── train
│   │   ├── val
│   ├── inaturalist2018
│   │   ├── meta
│   │   ├── train
│   │   ├── val
│   ├── VOCdevkit
│   │   ├── VOC2007
│   ├── cifar
│   │   ├── cifar-10-batches-py
```

## Prepare ImageNet

ImageNet has multiple versions, but the most commonly used one is ILSVRC 2012. It can be obtained with the following steps:

1. Register an account and log in to the download page
2. Find the download links for ILSVRC2012 and download the following two files
   - `ILSVRC2012_img_train.tar` (~138GB)
   - `ILSVRC2012_img_val.tar` (~6.3GB)
3. Untar the downloaded files
4. Download the meta data using this script
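For step 3, note that `ILSVRC2012_img_train.tar` packs one inner tar archive per class, each of which needs to be unpacked into its own folder. A minimal sketch of that extraction (the function name and output layout are assumptions, not part of the official tooling):

```python
import os
import tarfile


def extract_imagenet_train(train_tar, out_dir):
    """Unpack ILSVRC2012_img_train.tar, which contains one tar per class,
    into one sub-folder per class under out_dir.

    This is an illustrative sketch; paths and naming follow the folder
    layout shown above, not an official script.
    """
    os.makedirs(out_dir, exist_ok=True)
    with tarfile.open(train_tar) as outer:
        for member in outer.getmembers():
            if not member.name.endswith(".tar"):
                continue
            # Class tars are named like n01440764.tar; use the stem as folder.
            cls = os.path.basename(member.name)[: -len(".tar")]
            cls_dir = os.path.join(out_dir, cls)
            os.makedirs(cls_dir, exist_ok=True)
            inner_fobj = outer.extractfile(member)
            with tarfile.open(fileobj=inner_fobj) as inner:
                inner.extractall(cls_dir)
```

The validation tar, by contrast, is flat and can be untarred directly into `data/imagenet/val`.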

## Prepare Places205

For Places205, you need to:

1. Register an account and log in to the download page
2. Download the resized images and the image lists of the train set and validation set of Places205
3. Untar the downloaded files

## Prepare iNaturalist2018

For iNaturalist2018, you need to:

1. Download the training and validation images and annotations from the download page
2. Untar the downloaded files
3. Convert the original json annotation format to the list format using the script `tools/data_converters/convert_inaturalist.py`
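To illustrate what step 3 does, the sketch below converts a COCO-style annotation json (iNaturalist2018 uses top-level `images` and `annotations` keys) into a flat `<file_name> <category_id>` list. This is a simplified stand-in for `convert_inaturalist.py`, not its actual implementation, and the assumed json layout should be checked against the downloaded files:

```python
import json


def convert_inat_json_to_list(json_path, out_file):
    """Convert a COCO-style annotation json into a '<file_name> <label>'
    list file, one image per line.

    Assumes (hypothetically) top-level 'images' and 'annotations' keys,
    as in the iNaturalist2018 release.
    """
    with open(json_path) as f:
        ann = json.load(f)
    # Map image id -> file name, then pair each annotation with its file.
    id2name = {img["id"]: img["file_name"] for img in ann["images"]}
    with open(out_file, "w") as f:
        for a in ann["annotations"]:
            f.write(f"{id2name[a['image_id']]} {a['category_id']}\n")
```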

## Prepare PASCAL VOC

Assuming that you usually store datasets in `$YOUR_DATA_ROOT`, the following command will automatically download PASCAL VOC 2007 into `$YOUR_DATA_ROOT`, prepare the required files, create a folder `data` under `$MMSELFSUP`, and make a symlink `VOCdevkit`.

```shell
bash tools/data_converters/prepare_voc07_cls.sh $YOUR_DATA_ROOT
```

## Prepare CIFAR10

CIFAR10 will be downloaded automatically if it is not found. In addition, the dataset class implemented by MMSelfSup will automatically restructure CIFAR10 into the appropriate format.
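If you want to inspect the downloaded data yourself, each file in `cifar-10-batches-py` is a Python pickle holding flattened image rows under the `b"data"` key and integer labels under `b"labels"`. A minimal loader sketch (the function name is hypothetical; MMSelfSup's dataset class handles this for you):

```python
import pickle


def load_cifar_batch(path):
    """Load one CIFAR-10 batch file (Python pickle format).

    Returns the raw rows (each 3072 values: a 32x32x3 image, channel-first)
    and the list of labels in [0, 9].
    """
    with open(path, "rb") as f:
        # The batches were pickled under Python 2, hence encoding="bytes".
        batch = pickle.load(f, encoding="bytes")
    return batch[b"data"], batch[b"labels"]
```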

## Prepare datasets for detection and segmentation

### Detection

To prepare COCO, VOC2007 and VOC2012 for detection, you can refer to mmdet.

### Segmentation

To prepare VOC2012AUG and Cityscapes for segmentation, you can refer to mmseg.