Update docs.

pull/1445/head
mzr1996 2023-03-20 16:03:57 +08:00
parent f6b65fcbe7
commit 6038df9514
3 changed files with 14 additions and 14 deletions


@@ -68,7 +68,7 @@ If we want to use the automatic mixed precision training, we can simply change t
optim_wrapper = dict(type='AmpOptimWrapper', optimizer=...)
```
-Alternatively, for conveniency, we can set `--amp` parameter to turn on the AMP option directly in the `tools/train.py` script. Refers to [Training and test](../user_guides/train_test.md) tutorial for details of starting a training.
+Alternatively, for conveniency, we can set `--amp` parameter to turn on the AMP option directly in the `tools/train.py` script. Refers to [Training tutorial](../user_guides/train.md) for details of starting a training.
### Parameter-wise finely configuration
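
As a quick illustration of the `--amp` option this hunk refers to, a training launch might look like the sketch below; the config path is only a placeholder and is not taken from the diff.

```bash
# Enable automatic mixed precision training from the command line with --amp.
# The config file below is just an example path; use whichever config you actually train with.
python tools/train.py configs/resnet/resnet18_8xb32_in1k.py --amp
```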


@@ -39,4 +39,4 @@ result = inference_model(model, img_path)
{"pred_label":65,"pred_score":0.6649366617202759,"pred_class":"sea snake", "pred_scores": [..., 0.6649366617202759, ...]}
```
-An image demo can be found in [demo/image_demo.py](https://github.com/open-mmlab/mmpretrain/blob/main/demo/image_demo.py).
+An image demo can be found in [demo/image_demo.py](https://github.com/open-mmlab/mmclassification/blob/pretrain/demo/image_demo.py).
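
For context, running the linked image demo typically looks like the sketch below; the positional arguments are an assumption and may differ between versions, so check `python demo/image_demo.py --help` for the authoritative interface.

```bash
# Hypothetical invocation: classify one image with a model selected by name.
# Argument names and order are assumed, not taken from the diff above.
python demo/image_demo.py demo/demo.JPEG resnet18_8xb32_in1k
```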


@@ -46,11 +46,11 @@ We provide a shell script to start a multi-GPUs task with `torch.distributed.lau
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [PY_ARGS]
```
-| ARGS | Description |
-| ------------- | ------------------------------------------------------------------------------------- |
-| `CONFIG_FILE` | The path to the config file. |
-| `GPU_NUM` | The number of GPUs to be used. |
-| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#training-with-your-pc). |
+| ARGS | Description |
+| ------------- | ---------------------------------------------------------------------------------- |
+| `CONFIG_FILE` | The path to the config file. |
+| `GPU_NUM` | The number of GPUs to be used. |
+| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#train-with-your-pc). |
You can also specify extra arguments of the launcher by environment variables. For example, change the
communication port of the launcher to 29666 by the below command:
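
The command itself falls outside the lines shown in this hunk; a sketch of what it typically looks like, assuming `tools/dist_train.sh` honors a `PORT` environment variable as OpenMMLab launch scripts conventionally do:

```bash
# Sketch: 4-GPU training with the launcher's communication port overridden to 29666.
# PORT is an assumed environment variable name; the config path is only a placeholder.
PORT=29666 bash ./tools/dist_train.sh configs/resnet/resnet18_8xb32_in1k.py 4
```
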
@@ -106,13 +106,13 @@ If you run MMPretrain on a cluster managed with [slurm](https://slurm.schedmd.co
Here are the arguments description of the script.
-| ARGS | Description |
-| ------------- | ------------------------------------------------------------------------------------- |
-| `PARTITION` | The partition to use in your cluster. |
-| `JOB_NAME` | The name of your job, you can name it as you like. |
-| `CONFIG_FILE` | The path to the config file. |
-| `WORK_DIR` | The target folder to save logs and checkpoints. |
-| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#training-with-your-pc). |
+| ARGS | Description |
+| ------------- | ---------------------------------------------------------------------------------- |
+| `PARTITION` | The partition to use in your cluster. |
+| `JOB_NAME` | The name of your job, you can name it as you like. |
+| `CONFIG_FILE` | The path to the config file. |
+| `WORK_DIR` | The target folder to save logs and checkpoints. |
+| `[PY_ARGS]` | The other optional arguments of `tools/train.py`, see [here](#train-with-your-pc). |
Here are the environment variables can be used to configure the slurm job.
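
For illustration, a submission using the arguments from the table above might look like the sketch below; the script path `tools/slurm_train.sh` and the `GPUS` variable are assumptions based on the usual OpenMMLab layout, not something this hunk shows.

```bash
# Sketch: submit a job named `pretrain-demo` to the `gpu` partition with 8 GPUs.
# GPUS is an assumed environment variable; the partition, config, and work dir are placeholders.
GPUS=8 bash ./tools/slurm_train.sh gpu pretrain-demo configs/resnet/resnet18_8xb32_in1k.py ./work_dirs/resnet18
```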