# Implementation for DINO

**NOTE**: We only guarantee the correctness of the forward pass; we are not responsible for a full reimplementation, so results may not match the official DINO.

First, ensure you are in the root directory of MMSelfSup. You then have two choices for launching DINO pre-training in MMSelfSup, described in the two sections below.
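Before launching with either one, you can optionally print the fully resolved training config as a sanity check. The snippet below is a minimal sketch assuming `mmengine` is installed (it is a core dependency of MMSelfSup); it only loads and prints the config and does not build the model:

```shell
# Optional sanity check: load and print the resolved training config.
python -c "
from mmengine.config import Config

cfg = Config.fromfile(
    'projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py')
print(cfg.pretty_text)
"
```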

## Slurm

If you are using a cluster managed by Slurm, you can use the following command to start your job:

```shell
GPUS_PER_NODE=8 GPUS=8 CPUS_PER_TASK=16 bash projects/dino/tools/slurm_train.sh mm_model dino projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py --amp
```

The above command will pre-train the model on a single node with 8 GPUs.
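The script follows the common OpenMMLab Slurm launcher convention, where the first three positional arguments are the Slurm partition (here `mm_model`), the job name, and the config file, while `GPUS_PER_NODE`, `GPUS`, and `CPUS_PER_TASK` control the requested resources. Assuming that convention holds for this copy of the script (check `projects/dino/tools/slurm_train.sh` to confirm), a two-node run would look like:

```shell
# Hypothetical 2-node x 8-GPU launch; replace <partition> with your own queue.
GPUS_PER_NODE=8 GPUS=16 CPUS_PER_TASK=16 \
bash projects/dino/tools/slurm_train.sh <partition> dino \
    projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py --amp
```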

## PyTorch

If you are using a single machine without any cluster management software, you can use the following command:

```shell
NNODES=1 bash projects/dino/tools/dist_train.sh projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 --amp
```
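`dist_train.sh` scripts in OpenMMLab projects normally forward any extra flags to the underlying training entry point. Assuming this one follows the standard `tools/train.py` interface (an assumption worth verifying in `projects/dino/tools/dist_train.sh`), you could, for example, set a work directory and resume from the latest checkpoint:

```shell
# Assumed pass-through flags from the standard OpenMMLab train.py interface.
NNODES=1 bash projects/dino/tools/dist_train.sh \
    projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 \
    --amp --work-dir work_dirs/dino_vit-base-p16 --resume
```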