# Implementation for DINO

**NOTE**: We only guarantee the correctness of the forward pass; we take no responsibility for a full re-implementation.

First, ensure you are in the root directory of MMPretrain. Then you have two ways to run DINO pre-training in MMPretrain:

## Slurm

If you are using a cluster managed by Slurm, you can use the following command to start your job:

```shell
GPUS_PER_NODE=8 GPUS=8 CPUS_PER_TASK=16 bash projects/dino/tools/slurm_train.sh mm_model dino projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py --amp
```

The above command will pre-train the model on a single node with 8 GPUs.
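In the command above, `mm_model` and `dino` are passed as the Slurm partition and job name (following the usual OpenMMLab `slurm_train.sh` convention); substitute names that are valid on your cluster. As a hedged sketch, assuming the script treats `GPUS` as the total GPU count and `GPUS_PER_NODE` as the per-node count, the same job could be scaled to two nodes like this:

```shell
# Hedged sketch: scale the same pre-training job to 2 nodes (16 GPUs in total).
# Assumes the standard MMEngine Slurm launcher semantics, where GPUS is the
# total number of GPUs and GPUS_PER_NODE the number per node; "mm_model" and
# "dino" are placeholder partition/job names.
GPUS_PER_NODE=8 GPUS=16 CPUS_PER_TASK=16 bash projects/dino/tools/slurm_train.sh mm_model dino projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py --amp
```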

## PyTorch

If you are using a single machine without any cluster management software, you can use the following command:

```shell
NNODES=1 bash projects/dino/tools/dist_train.sh projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 --amp
```
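Here the trailing `8` is the number of GPUs to use and `--amp` enables automatic mixed precision. If you want to pre-train across several machines without Slurm, the sketch below is one possible way to do it, assuming `dist_train.sh` follows the standard MMEngine launcher and reads `NNODES`, `NODE_RANK`, `MASTER_ADDR` and `PORT` from the environment; the IP address is a placeholder.

```shell
# Hedged sketch: the same job on 2 machines with 8 GPUs each, assuming
# dist_train.sh follows the standard MMEngine distributed launcher and reads
# NNODES, NODE_RANK, MASTER_ADDR and PORT from the environment.
# On the first machine (rank 0); 192.168.0.1 is a placeholder for its IP:
NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 PORT=29500 bash projects/dino/tools/dist_train.sh projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 --amp
# On the second machine (rank 1), pointing at the same MASTER_ADDR and PORT:
NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 PORT=29500 bash projects/dino/tools/dist_train.sh projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 --amp
```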