## MoCo v3 Reference Setups and Models
Here we document the reference commands for pre-training and evaluating various MoCo v3 models.
### ResNet-50 models
With a batch size of 4096, pre-training of all ResNet-50 models fits on 2 nodes with a total of 16 Volta 32G GPUs (i.e., 256 images per GPU).
#### ResNet-50, 100-epoch pre-training
On the first node, run:
```
python main_moco.py \
--moco-m-cos --crop-min=.2 \
--dist-url 'tcp://[your first node address]:[specified port]' \
--multiprocessing-distributed --world-size 2 --rank 0 \
[your imagenet-folder with train and val folders]
```
On the second node, run the same command with `--rank 1`.
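For concreteness, the corresponding command on the second node is identical except for the rank; the address and port still point to the first node:
```
python main_moco.py \
--moco-m-cos --crop-min=.2 \
--dist-url 'tcp://[your first node address]:[specified port]' \
--multiprocessing-distributed --world-size 2 --rank 1 \
[your imagenet-folder with train and val folders]
```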
#### ResNet-50, 300-epoch pre-training
On the first node, run:
```
python main_moco.py \
--lr=.3 --epochs=300 \
--moco-m-cos --crop-min=.2 \
--dist-url 'tcp://[your first node address]:[specified port]' \
--multiprocessing-distributed --world-size 2 --rank 0 \
[your imagenet-folder with train and val folders]
```
On the second node, run the same command with `--rank 1`.
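Note that the 300- and 1000-epoch recipes set the base learning rate explicitly with `--lr=.3`. In the reference code, the per-epoch learning rate then follows linear warmup plus half-cosine decay; below is a minimal sketch of that shape (the function name and the 10-epoch warmup length are assumptions, not flags documented here):
```
import math

def cosine_lr(base_lr, epoch, total_epochs, warmup_epochs=10):
    """Linear warmup to base_lr, then half-cycle cosine decay toward 0."""
    if epoch < warmup_epochs:
        return base_lr * epoch / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0.3, 10, 300))   # 0.3 at the end of warmup (the peak)
```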
#### ResNet-50, 1000-epoch pre-training
On the first node, run:
```
python main_moco.py \
--lr=.3 --wd=1.5e-6 --epochs=1000 \
--moco-m=0.996 --moco-m-cos --crop-min=.2 \
--dist-url 'tcp://[your first node address]:[specified port]' \
--multiprocessing-distributed --world-size 2 --rank 0 \
[your imagenet-folder with train and val folders]
```
On the second node, run the same command with `--rank 1`.
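The `--moco-m-cos` flag, used in all recipes above, ramps the momentum-encoder EMA coefficient from its base value `--moco-m` toward 1 over training on a half-cosine schedule; the 1000-epoch recipe lowers the base value to 0.996. A minimal standalone sketch of that ramp (the function name here is illustrative):
```
import math

def moco_momentum(base_m, epoch, total_epochs):
    """Half-cycle cosine ramp of the momentum-encoder EMA coefficient:
    returns base_m at epoch 0 and approaches 1.0 by the end of training."""
    return 1.0 - (1.0 - base_m) * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

print(moco_momentum(0.996, 500, 1000))   # 0.998, halfway between 0.996 and 1.0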
#### ResNet-50, linear classification
Run on a single node:
```
python main_lincls.py \
--dist-url 'tcp://localhost:10001' \
--multiprocessing-distributed --world-size 1 --rank 0 \
--pretrained [your checkpoint path]/[your checkpoint file].pth.tar \
[your imagenet-folder with train and val folders]
```
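Before launching linear classification, it can help to sanity-check the pre-training checkpoint. A minimal sketch, assuming the `.pth.tar` file stores its weights under a `state_dict` key, as the reference training script saves them:
```
import torch

# Load on CPU so no GPU is needed just to inspect the file.
ckpt = torch.load('[your checkpoint path]/[your checkpoint file].pth.tar',
                  map_location='cpu')
print(sorted(ckpt.keys()))   # expect entries such as 'state_dict' and 'epoch'
print(len(ckpt['state_dict']), 'parameter tensors in the saved encoder')
```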
Reference results for these recipes:

pretrain epochs | linear acc (%) | pretrain files | linear files
---|---|---|---
100 | 68.9 | chpt | chpt / log
300 | 72.8 | chpt | chpt / log
1000 | 74.6 | chpt | chpt / log
### ViT models

model | pretrain epochs | linear acc (%) | pretrain files | linear files
---|---|---|---|---
ViT-Small | 300 | 73.2 | chpt | chpt / log
ViT-Base | 300 | 76.7 | chpt | chpt / log