# Implementation for DINO
**NOTE**: We only guarantee the correctness of the forward pass; we are not responsible for a full reimplementation.
First, ensure you are in the root directory of MMPretrain. Then you have two ways to play with DINO:
## Slurm
If you are using a cluster managed by Slurm, you can use the following command to start your job:
```shell
GPUS_PER_NODE=8 GPUS=8 CPUS_PER_TASK=16 bash projects/dino/tools/slurm_train.sh mm_model dino projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py --amp
```
The above command will pre-train the model on a single node with 8 GPUs.
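To scale beyond one node, the same script can be reused with adjusted environment variables. The sketch below assumes `projects/dino/tools/slurm_train.sh` follows the standard MMPretrain Slurm launcher, which derives the node count from `GPUS` and `GPUS_PER_NODE`; the partition name `mm_model` is a placeholder, so verify both against your cluster before running:

```shell
# A sketch for scaling to 2 nodes (16 GPUs total, 8 per node), assuming
# slurm_train.sh reads the GPUS, GPUS_PER_NODE, and CPUS_PER_TASK variables
# like the standard MMPretrain launcher does.
GPUS_PER_NODE=8 GPUS=16 CPUS_PER_TASK=16 bash projects/dino/tools/slurm_train.sh \
    mm_model dino \
    projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py --amp
```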
## PyTorch
If you are using a single machine without any cluster management software, you can use the following command:
```shell
NNODES=1 bash projects/dino/tools/dist_train.sh projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 --amp
```
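
The script can also launch across several machines without Slurm. The sketch below assumes `projects/dino/tools/dist_train.sh` matches the standard MMPretrain distributed launcher, which reads `NNODES`, `NODE_RANK`, `MASTER_ADDR`, and `PORT` from the environment; `10.0.0.1` is a placeholder for the IP of the first machine:

```shell
# On the first machine (rank 0):
NNODES=2 NODE_RANK=0 MASTER_ADDR=10.0.0.1 PORT=29500 \
    bash projects/dino/tools/dist_train.sh \
    projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 --amp

# On the second machine (rank 1), with the same MASTER_ADDR and PORT:
NNODES=2 NODE_RANK=1 MASTER_ADDR=10.0.0.1 PORT=29500 \
    bash projects/dino/tools/dist_train.sh \
    projects/dino/config/dino_vit-base-p16_8xb64-amp-coslr-100e_in1k.py 8 --amp
```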