# Dummy MAE Wrapper
This is an example README for community `projects/`. We have provided detailed explanations for each field in the form of HTML comments, which are visible when you read the source of this README file. If you wish to submit your project to our main repository, all fields in this README are mandatory, so that others can understand what you have achieved in this implementation.
## Description
This project implements a dummy MAE wrapper, which prints "Welcome to MMSelfSup" during initialization.
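For reference, here is a minimal sketch of what such a wrapper might look like, assuming MMSelfSup 1.x (where `MAE` is exported from `mmselfsup.models` and models are registered through `mmselfsup.registry.MODELS`); the class name `DummyMAE` and the file location are illustrative:

```python
# models/dummy_mae.py (illustrative location)
from mmselfsup.models import MAE
from mmselfsup.registry import MODELS


@MODELS.register_module()
class DummyMAE(MAE):
    """A dummy wrapper around MAE that only adds a greeting on init."""

    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        print('Welcome to MMSelfSup')
```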
## Usage

### Setup Environment
Please refer to the Get Started documentation of MMSelfSup.
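Once installed, a quick way to confirm the package is importable (a sanity check, not a required step):

```python
# verify that MMSelfSup is installed and visible to Python
import mmselfsup
print(mmselfsup.__version__)
```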
### Data Preparation
Show the dataset directory structure, or provide the commands for dataset preparation, if needed.
For example:
```
data/
└── imagenet
    ├── train
    ├── val
    └── meta
        ├── train.txt
        └── val.txt
```
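With this layout in place, a dataset config can point at the annotation files directly. A minimal sketch, assuming the MMSelfSup 1.x `ImageNet` dataset type; only the path-related fields are shown, so adjust them to your setup:

```python
# partial dataset config (sketch)
train_dataloader = dict(
    dataset=dict(
        type='ImageNet',
        data_root='data/imagenet/',
        ann_file='meta/train.txt',
        data_prefix=dict(img_path='train/'),
    ))
```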
### Pre-training Commands
First, you need to add the current folder to `PYTHONPATH`, so that Python can find your model files. In the `example_project/` root directory, please run the command below to add it:
```shell
export PYTHONPATH=`pwd`:$PYTHONPATH
```
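For context, `PYTHONPATH` matters because the project config imports your local `models/` package so the custom class gets registered. A sketch of how the config might wire this up, assuming MMEngine's `custom_imports` mechanism; the base config path and module name are illustrative:

```python
# configs/dummy-mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py (sketch)
_base_ = '../../configs/selfsup/mae/mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py'  # illustrative path

# import the project's models package so DummyMAE is registered;
# this import only works if the project root is on PYTHONPATH
custom_imports = dict(imports=['models'], allow_failed_imports=False)

# swap in the registered dummy wrapper
model = dict(type='DummyMAE')
```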
Then run the following commands to train the model:
#### On Local Single GPU
```shell
mim train mmselfsup $CONFIG --work-dir $WORK_DIR

# a specific command example
mim train mmselfsup configs/dummy-mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py \
    --work-dir work_dirs/dummy_mae/
```
#### On Multiple GPUs

```shell
# a specific command example, 8 GPUs here
mim train mmselfsup configs/dummy-mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py \
    --work-dir work_dirs/dummy_mae/ \
    --launcher pytorch --gpus 8
```
Note:

- `CONFIG`: the config files under the directory `configs/`
- `WORK_DIR`: the working directory to save configs, logs, and checkpoints
#### On Multiple GPUs with Slurm

```shell
# a specific command example: 16 GPUs in 2 nodes
mim train mmselfsup configs/dummy-mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py \
    --work-dir work_dirs/dummy_mae/ \
    --launcher slurm --gpus 16 --gpus-per-node 8 \
    --partition $PARTITION
```
Note:

- `CONFIG`: the config files under the directory `configs/`
- `WORK_DIR`: the working directory to save configs, logs, and checkpoints
- `PARTITION`: the Slurm partition you are using
### Downstream Tasks Commands
In MMSelfSup's root directory, run the following command to train the downstream model:
```shell
mim train mmcls $CONFIG \
    --work-dir $WORK_DIR \
    --launcher pytorch --gpus 8 \
    [optional args]

# a specific command example
mim train mmcls configs/vit-base-p16_ft-8xb128-coslr-100e_in1k.py \
    --work-dir work_dirs/dummy_mae/classification/ \
    --launcher pytorch --gpus 8 \
    --cfg-options model.backbone.init_cfg.type=Pretrained \
    model.backbone.init_cfg.checkpoint=https://download.openmmlab.com/mmselfsup/1.x/mae/mae_vit-base-p16_8xb512-fp16-coslr-300e_in1k/mae_vit-base-p16_8xb512-coslr-300e-fp16_in1k_20220829-c2cf66ba.pth \
    model.backbone.init_cfg.prefix="backbone." \
    $PY_ARGS
```
Note:

- `CONFIG`: the config files under the directory `configs/`
- `WORK_DIR`: the working directory to save configs, logs, and checkpoints
- `CHECKPOINT`: the pretrained checkpoint of MMSelfSup saved in the working directory, like `$WORK_DIR/epoch_300.pth`
- `PY_ARGS`: other optional args
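For reference, the `--cfg-options` overrides in the example above correspond to the following snippet inside the fine-tuning config (a sketch; the checkpoint URL is the one used in the command):

```python
# equivalent in-config form of the --cfg-options overrides above
model = dict(
    backbone=dict(
        init_cfg=dict(
            # initialize the backbone from the MMSelfSup pretrained checkpoint
            type='Pretrained',
            checkpoint='https://download.openmmlab.com/mmselfsup/1.x/mae/mae_vit-base-p16_8xb512-fp16-coslr-300e_in1k/mae_vit-base-p16_8xb512-coslr-300e-fp16_in1k_20220829-c2cf66ba.pth',
            # only load weights under the 'backbone.' prefix of the checkpoint
            prefix='backbone.',
        )))
```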
## Results
If you have any downstream task results, you could list them here.
For example:
The Linear Eval and Fine-tuning results (top-1 accuracy, %) are based on the ImageNet dataset.
| Algorithm | Backbone | Epoch | Batch Size | Linear Eval | Fine-tuning |
| :-------: | :------: | :---: | :--------: | :---------: | :---------: |
|    MAE    | ViT-base |  300  |    4096    |    60.8     |    83.1     |
## Citation
```bibtex
@misc{mmselfsup2021,
    title={{MMSelfSup}: OpenMMLab Self-Supervised Learning Toolbox and Benchmark},
    author={MMSelfSup Contributors},
    howpublished={\url{https://github.com/open-mmlab/mmselfsup}},
    year={2021}
}
```
## Checklist
Here is a checklist illustrating the usual development workflow of a successful project; it also serves as an overview of this project's progress.
- [ ] Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

  - [ ] Finish the code
  - [ ] Basic docstrings & proper citation
  - [ ] Inference correctness
  - [ ] A full README

- [ ] Milestone 2: Indicates a successful model implementation.

  - [ ] Training-time correctness

- [ ] Milestone 3: Good to be a part of our core package!

  - [ ] Type hints and docstrings
  - [ ] Unit tests
  - [ ] Code polishing
  - [ ] Metafile.yml and README.md
  - [ ] Refactor and move your modules into the core package following the codebase's file hierarchy structure.