# Getting Started
- [Getting Started](#getting-started)
  - [Train existing methods](#train-existing-methods)
    - [Train with single/multiple GPUs](#train-with-singlemultiple-gpus)
    - [Train with multiple machines](#train-with-multiple-machines)
    - [Launch multiple jobs on a single machine](#launch-multiple-jobs-on-a-single-machine)
  - [Benchmarks](#benchmarks)
  - [Tools and Tips](#tools-and-tips)
    - [Count number of parameters](#count-number-of-parameters)
    - [Publish a model](#publish-a-model)
    - [Use t-SNE](#use-t-sne)
    - [Reproducibility](#reproducibility)

This page provides basic tutorials about the usage of MMSelfSup. For installation instructions, please see [install.md](install.md).
## Train existing methods
**Note**: The default learning rate in config files is for 8 GPUs. If you use a different number of GPUs, the total batch size changes in proportion, so you have to scale the learning rate following `new_lr = old_lr * new_ngpus / old_ngpus`. We recommend using `tools/dist_train.sh` even with a single GPU, since some methods do not support non-distributed training.
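For example, if the config's default rate was `lr=0.03` for 8 GPUs and you train with 4 GPUs, the scaled rate is `0.03 * 4 / 8 = 0.015`. A minimal sketch of overriding it on the command line, assuming these hypothetical values, that your config stores the learning rate at `optimizer.lr`, and that `--cfg-options` is forwarded to `tools/train.py` as in the slurm examples later on this page:

```shell
# Hypothetical numbers: the config was tuned for 8 GPUs with lr=0.03,
# so a 4-GPU run uses new_lr = 0.03 * 4 / 8 = 0.015.
bash tools/dist_train.sh ${CONFIG_FILE} 4 --work_dir ${YOUR_WORK_DIR} --cfg-options optimizer.lr=0.015
```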
### Train with single/multiple GPUs
```shell
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} --work_dir ${YOUR_WORK_DIR} [optional arguments]
```
Optional arguments are:
- `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
- `--deterministic`: Switch on "deterministic" mode, which slows down training but makes the results reproducible.

An example:
```shell
# checkpoints and logs saved in WORK_DIR=work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/
bash tools/dist_train.sh configs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k.py 8 --work_dir work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/
```
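The optional arguments listed above can be appended to the same command; for instance, a sketch that resumes from an earlier run and switches on deterministic mode (the checkpoint path is hypothetical):

```shell
bash tools/dist_train.sh configs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k.py 8 \
    --work_dir work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/ \
    --resume_from work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/latest.pth \
    --deterministic
```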
**Note**: During training, checkpoints and logs are saved in the same folder structure as the config file under `work_dirs/`. A custom work directory is not recommended, since evaluation scripts infer work directories from the config file name. If you want to save your weights somewhere else, please use a symlink, for example:
```shell
ln -s ${YOUR_WORK_DIRS} ${MMSELFSUP}/work_dirs
```
Alternatively, if you run MMSelfSup on a cluster managed with [slurm](https://slurm.schedmd.com/):
```shell
GPUS_PER_NODE=${GPUS_PER_NODE} GPUS=${GPUS} SRUN_ARGS=${SRUN_ARGS} bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${YOUR_WORK_DIR} [optional arguments]
```
An example:
```shell
GPUS_PER_NODE=8 GPUS=8 bash tools/slurm_train.sh Dummy Test_job configs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k.py work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/
```
### Train with multiple machines
If you launch with multiple machines simply connected with Ethernet, you have to modify `tools/dist_train.sh` or create a new script; please refer to the PyTorch [launch utility](https://pytorch.org/docs/stable/distributed.html#launch-utility). Training is usually slow if you do not have high-speed networking like InfiniBand.
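As a rough sketch only, a run on two machines with 8 GPUs each could be launched like this with the PyTorch launcher; the IP address is a placeholder, and the exact entry point and flags (`tools/train.py`, `--launcher pytorch`) should be checked against your copy of `tools/dist_train.sh`:

```shell
# On the first machine (node rank 0); 10.0.0.1 is a placeholder for its IP.
python -m torch.distributed.launch --nnodes=2 --node_rank=0 --nproc_per_node=8 \
    --master_addr=10.0.0.1 --master_port=29500 \
    tools/train.py ${CONFIG_FILE} --work_dir ${YOUR_WORK_DIR} --launcher pytorch

# On the second machine (node rank 1), with the same master address and port.
python -m torch.distributed.launch --nnodes=2 --node_rank=1 --nproc_per_node=8 \
    --master_addr=10.0.0.1 --master_port=29500 \
    tools/train.py ${CONFIG_FILE} --work_dir ${YOUR_WORK_DIR} --launcher pytorch
```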
If you launch with slurm, the command is the same as that on a single machine described above, but you need to refer to [slurm_train.sh](../tools/slurm_train.sh) to set appropriate parameters and environment variables.
### Launch multiple jobs on a single machine
If you launch multiple jobs on a single machine, e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs, you need to specify different ports (29500 by default) for each job to avoid communication conflicts.

If you use `dist_train.sh` to launch training jobs:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash tools/dist_train.sh ${CONFIG_FILE} 4 --work_dir tmp_work_dir_1
CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 bash tools/dist_train.sh ${CONFIG_FILE} 4 --work_dir tmp_work_dir_2
```
If you launch training jobs with slurm, you have two options to set different communication ports:

Option 1:

In `config1.py`:
```python
dist_params = dict(backend='nccl', port=29500)
```
In `config2.py`:
```python
dist_params = dict(backend='nccl', port=29501)
```
Then you can launch two jobs with `config1.py` and `config2.py`:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2
```
Option 2:

You can set different communication ports without modifying the configuration files, but you have to use `--cfg-options` to override the default port set in the configuration file.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1 --cfg-options dist_params.port=29500
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2 --cfg-options dist_params.port=29501
```
## Benchmarks
We also provide commands to evaluate your pre-trained model on several downstream tasks; please refer to [Benchmarks](./tutorials/6_benchmarks.md) for details.
## Tools and Tips
### Count number of parameters
```shell
python tools/analysis_tools/count_parameters.py ${CONFIG_FILE}
```
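For example, to count the parameters of the ODC model used in the training examples above:

```shell
python tools/analysis_tools/count_parameters.py configs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k.py
```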
### Publish a model
Before you publish a model, you may want to
- Convert model weights to CPU tensors.
- Delete the optimizer states.
- Compute the hash of the checkpoint file and append the hash id to the filename.
```shell
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```
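For example, to publish the ODC checkpoint trained above (both filenames are only illustrative; the script appends the computed hash id to the output name):

```shell
python tools/model_converters/publish_model.py \
    work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/latest.pth \
    odc_resnet50_8xb64-steplr-440e_in1k.pth
```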
### Use t-SNE
We provide an off-the-shelf tool to visualize the quality of image representations by t-SNE.
```shell
python tools/analysis_tools/visualize_tsne.py ${CONFIG_FILE} --checkpoint ${CKPT_PATH} --work_dir ${WORK_DIR} [optional arguments]
```
Arguments:
- `CONFIG_FILE`: the config file of the pre-trained model.
- `CKPT_PATH`: the path of the model's checkpoint.
- `WORK_DIR`: the directory to save the results of the visualization.
- `[optional arguments]`: for optional arguments, please refer to [visualize_tsne.py](../tools/analysis_tools/visualize_tsne.py).
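A usage sketch with the ODC config from the training examples above (the checkpoint path is hypothetical):

```shell
python tools/analysis_tools/visualize_tsne.py configs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k.py \
    --checkpoint work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/latest.pth \
    --work_dir work_dirs/tsne/odc_resnet50_8xb64-steplr-440e_in1k/
```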
### Reproducibility
If you want to make your results exactly reproducible, please switch on `--deterministic` to train the final model to be published. Note that this flag will switch off `torch.backends.cudnn.benchmark` and slow down the training speed.
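For example, to train the ODC model from the earlier examples deterministically:

```shell
bash tools/dist_train.sh configs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k.py 8 \
    --work_dir work_dirs/selfsup/odc/odc_resnet50_8xb64-steplr-440e_in1k/ --deterministic
```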