Dummy MAE Wrapper

This is an example README for community projects/. Detailed explanations for each field are provided as HTML comments, which are visible when you read the source of this README file. If you wish to submit your project to our main repository, all fields in this README are mandatory so that others can understand what your implementation achieves.

Description

This project implements a dummy MAE wrapper, which prints "Welcome to MMSelfSup" during initialization.
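
As an illustration, such a wrapper can be as small as a subclass that overrides __init__. The sketch below is an assumption about this project's layout rather than part of the README itself: it supposes a models/dummy_mae.py that subclasses the existing MAE algorithm and registers the wrapper through MMSelfSup's MODELS registry.

# models/dummy_mae.py (file name and layout are assumptions)
from mmselfsup.models import MAE
from mmselfsup.registry import MODELS


@MODELS.register_module()
class DummyMAE(MAE):
    """MAE wrapper that greets the user on initialization."""

    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        print('Welcome to MMSelfSup')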

Usage

Pre-training commands

In MMSelfSup's root directory, run the following command to train the model:

python tools/train.py projects/example_project/configs/dummy-mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py
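
The config for this command can inherit an existing MAE config and only swap in the wrapper. The following is a minimal sketch, assuming the relative path of the base config and a custom_imports entry that makes the project's models package importable; both details are assumptions about the layout.

# configs/dummy-mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py (sketch)
_base_ = '../../../configs/selfsup/mae/mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py'

# Register the project's modules before the model is built.
custom_imports = dict(
    imports=['projects.example_project.models'],
    allow_failed_imports=False)

# Replace the algorithm type with the dummy wrapper; all other
# settings are inherited from the base MAE config.
model = dict(type='DummyMAE')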

Downstream task commands

In MMSelfSup's root directory, run the following command to train the downstream model:

sh tools/benchmarks/classification/mim_dist_train.sh ${CONFIGS} ${CHECKPOINT} [optional args]

# example of a custom command
GPUS=1 sh tools/benchmarks/classification/mim_dist_train.sh projects/example_project/configs/xxx.py ${CHECKPOINT} --work-dir work_dirs/example_project/classification/

Results

If you have any downstream task results, you can list them here.

For example:

The Linear Eval and Fine-tuning results are based on the ImageNet dataset.

| Algorithm | Backbone | Epoch | Batch Size | Linear Eval | Fine-tuning |
| :-------- | :------- | :---- | :--------- | :---------- | :---------- |
| MAE       | ViT-base | 300   | 4096       | 60.8        | 83.1        |

Citation

@misc{mmselfsup2021,
    title={{MMSelfSup}: OpenMMLab Self-Supervised Learning Toolbox and Benchmark},
    author={MMSelfSup Contributors},
    howpublished={\url{https://github.com/open-mmlab/mmselfsup}},
    year={2021}
}

Checklist

  • Milestone 1: PR-ready, and acceptable as one of the projects/.

    • Finish the code

    • Basic docstrings & proper citation

    • Inference correctness

    • A full README

  • Milestone 2: Indicates a successful model implementation.

    • Training-time correctness

  • Milestone 3: Good to be a part of our core package!

    • Type hints and docstrings

    • Unit tests

    • Code polishing

    • Metafile.yml and README.md

    • Refactor and move your modules into the core package, following the codebase's file hierarchy.