From 6038df9514587386d512e11cbb226cf3748e2733 Mon Sep 17 00:00:00 2001
From: mzr1996
Date: Mon, 20 Mar 2023 16:03:57 +0800
Subject: [PATCH] Update docs.

---
 docs/en/advanced_guides/schedule.md |  2 +-
 docs/en/user_guides/inference.md    |  2 +-
 docs/en/user_guides/train.md        | 24 ++++++++++++------------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/docs/en/advanced_guides/schedule.md b/docs/en/advanced_guides/schedule.md
index 6ea992db..ccee84a0 100644
--- a/docs/en/advanced_guides/schedule.md
+++ b/docs/en/advanced_guides/schedule.md
@@ -68,7 +68,7 @@ If we want to use the automatic mixed precision training, we can simply change t
 optim_wrapper = dict(type='AmpOptimWrapper', optimizer=...)
 ```

-Alternatively, for conveniency, we can set `--amp` parameter to turn on the AMP option directly in the `tools/train.py` script. Refers to [Training and test](../user_guides/train_test.md) tutorial for details of starting a training.
+Alternatively, for conveniency, we can set `--amp` parameter to turn on the AMP option directly in the `tools/train.py` script. Refers to [Training tutorial](../user_guides/train.md) for details of starting a training.

 ### Parameter-wise finely configuration

diff --git a/docs/en/user_guides/inference.md b/docs/en/user_guides/inference.md
index 77bd3cdd..fd0b29bf 100644
--- a/docs/en/user_guides/inference.md
+++ b/docs/en/user_guides/inference.md
@@ -39,4 +39,4 @@ result = inference_model(model, img_path)
 {"pred_label":65,"pred_score":0.6649366617202759,"pred_class":"sea snake", "pred_scores": [..., 0.6649366617202759, ...]}
 ```

-An image demo can be found in [demo/image_demo.py](https://github.com/open-mmlab/mmpretrain/blob/main/demo/image_demo.py).
+An image demo can be found in [demo/image_demo.py](https://github.com/open-mmlab/mmclassification/blob/pretrain/demo/image_demo.py).
diff --git a/docs/en/user_guides/train.md b/docs/en/user_guides/train.md
index 494e2071..c8465ea0 100644
--- a/docs/en/user_guides/train.md
+++ b/docs/en/user_guides/train.md
@@ -46,11 +46,11 @@ We provide a shell script to start a multi-GPUs task with `torch.distributed.lau
 bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [PY_ARGS]
 ```

-| ARGS          | Description                                                                             |
-| ------------- | --------------------------------------------------------------------------------------- |
-| `CONFIG_FILE` | The path to the config file.                                                             |
-| `GPU_NUM`     | The number of GPUs to be used.                                                           |
-| `[PY_ARGS]`   | The other optional arguments of `tools/train.py`, see [here](#training-with-your-pc).    |
+| ARGS          | Description                                                                          |
+| ------------- | ------------------------------------------------------------------------------------ |
+| `CONFIG_FILE` | The path to the config file.                                                          |
+| `GPU_NUM`     | The number of GPUs to be used.                                                        |
+| `[PY_ARGS]`   | The other optional arguments of `tools/train.py`, see [here](#train-with-your-pc).    |

 You can also specify extra arguments of the launcher by environment variables. For example, change the communication port of the launcher to 29666 by the below command:

@@ -106,13 +106,13 @@ If you run MMPretrain on a cluster managed with [slurm](https://slurm.schedmd.co

 Here are the arguments description of the script.

-| ARGS          | Description                                                                             |
-| ------------- | --------------------------------------------------------------------------------------- |
-| `PARTITION`   | The partition to use in your cluster.                                                     |
-| `JOB_NAME`    | The name of your job, you can name it as you like.                                        |
-| `CONFIG_FILE` | The path to the config file.                                                               |
-| `WORK_DIR`    | The target folder to save logs and checkpoints.                                            |
-| `[PY_ARGS]`   | The other optional arguments of `tools/train.py`, see [here](#training-with-your-pc).      |
+| ARGS          | Description                                                                          |
+| ------------- | ------------------------------------------------------------------------------------ |
+| `PARTITION`   | The partition to use in your cluster.                                                 |
+| `JOB_NAME`    | The name of your job, you can name it as you like.                                    |
+| `CONFIG_FILE` | The path to the config file.                                                          |
+| `WORK_DIR`    | The target folder to save logs and checkpoints.                                       |
+| `[PY_ARGS]`   | The other optional arguments of `tools/train.py`, see [here](#train-with-your-pc).    |

 Here are the environment variables can be used to configure the slurm job.
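For reference, the launch commands documented on the pages touched above look roughly like the sketch below. The config path is illustrative, and the `PORT` variable and the `tools/slurm_train.sh` script name are assumed from the usual OpenMMLab tooling layout rather than quoted from this patch.

```bash
# Single-machine training with automatic mixed precision enabled via the --amp flag.
python tools/train.py configs/resnet/resnet50_8xb32_in1k.py --amp

# Multi-GPU training through the torch.distributed launcher script; PORT changes the
# communication port (29666 here), as described alongside the updated table.
PORT=29666 bash ./tools/dist_train.sh configs/resnet/resnet50_8xb32_in1k.py 8

# Training on a slurm-managed cluster; arguments follow the PARTITION, JOB_NAME,
# CONFIG_FILE, WORK_DIR, [PY_ARGS] order from the table (script name is assumed).
bash ./tools/slurm_train.sh my_partition my_job \
    configs/resnet/resnet50_8xb32_in1k.py work_dirs/my_job --amp
```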