[Docs] translate classification.md, detection.md, segmentation.md (#665)

* Add files via upload

Chinese document translation

* Update docs/zh_cn/user_guides/classification.md

Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com>

* Update docs/zh_cn/user_guides/classification.md

Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com>

* Update classification.md

* Update detection.md

* Update detection.md

* Update segmentation.md

* update

Co-authored-by: Yixiao Fang <36138628+fangyixiao18@users.noreply.github.com>
Co-authored-by: fangyixiao18 <fangyx18@hotmail.com>
Junlin Chang 2023-01-11 19:48:41 +08:00 committed by GitHub
parent f96b56a4ca
commit 482e9bbc37
6 changed files with 103 additions and 108 deletions

View File

@@ -39,9 +39,9 @@ Remarks:
 - `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment.
 - `${FEATURE_LIST}` is a string to specify features from layer1 to layer5 to evaluate; e.g., if you want to evaluate layer5 only, then `FEATURE_LIST` is "feat5", if you want to evaluate all features, then `FEATURE_LIST` is "feat1 feat2 feat3 feat4 feat5" (separated by space). If left empty, the default `FEATURE_LIST` is "feat5".
-- `PRETRAIN`: the pre-trained model file.
+- `${PRETRAIN}`: the pre-trained model file.
 - if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command.
-- `EPOCH` is the epoch number of the ckpt that you want to test
+- `${EPOCH}` is the epoch number of the ckpt that you want to test
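As a quick illustration of how the remarks above combine, here is a hedged sketch of the epoch-based SVM evaluation; the config path, epoch number, and GPU counts are placeholders, and the argument order is assumed to mirror the slurm variant shown earlier.

```shell
# Hypothetical invocation: evaluate features feat4 and feat5 of the checkpoint
# saved at epoch 100, overriding the default GPU count (paths are placeholders).
GPUS_PER_NODE=4 GPUS=4 bash tools/benchmarks/classification/svm_voc07/dist_test_svm_epoch.sh \
    configs/selfsup/byol/byol_resnet50_16xb256-coslr-200e_in1k.py \
    100 \
    "feat4 feat5"
```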
 ## Linear Evaluation and Fine-tuning
@@ -68,9 +68,8 @@ bash tools/benchmarks/classification/mim_slurm_train.sh ${PARTITION} ${JOB_NAME}
 Remarks:
-- The default GPU number is 8. When changing GPUS, please also change `samples_per_gpu` in the config file accordingly to ensure the total batch size is 256.
-- `CONFIG`: Use config files under `configs/benchmarks/classification/`. Specifically, `imagenet` (excluding `imagenet_*percent` folders), `places205` and `inaturalist2018`.
-- `PRETRAIN`: the pre-trained model file.
+- `${CONFIG}`: Use config files under `configs/benchmarks/classification/`. Specifically, `imagenet` (excluding `imagenet_*percent` folders), `places205` and `inaturalist2018`.
+- `${PRETRAIN}`: the pre-trained model file.
 Example:
@@ -92,7 +91,7 @@ bash tools/benchmarks/classification//mim_slurm_test.sh ${PARTITION} ${CONFIG} $
 Remarks:
-- `CHECKPOINT`: The well-trained classification model that you want to test.
+- `${CHECKPOINT}`: The well-trained classification model that you want to test.
 Example:
@@ -109,8 +108,8 @@ To run ImageNet semi-supervised classification, we still use the same `.sh` scri
 Remarks:
 - The default GPU number is 4.
-- `CONFIG`: Use config files under `configs/benchmarks/classification/imagenet/`, named `imagenet_*percent` folders.
-- `PRETRAIN`: the pre-trained model file.
+- `${CONFIG}`: Use config files under `configs/benchmarks/classification/imagenet/`, named `imagenet_*percent` folders.
+- `${PRETRAIN}`: the pre-trained model file.
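No concrete command is shown for this benchmark, so here is a hedged sketch; the config filename under the `imagenet_*percent` folders is hypothetical and should be replaced by one that actually exists in your checkout.

```shell
# Hypothetical example: fine-tune on the 10% ImageNet split with the same
# launcher used for linear evaluation (config filename is a placeholder).
bash tools/benchmarks/classification/mim_dist_train.sh \
    configs/benchmarks/classification/imagenet/imagenet_10percent/resnet50_4xb64-20e_in1k-10pct.py \
    work_dir/pretrained_model.pth
```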
 ## ImageNet Nearest-Neighbor Classification
@@ -131,7 +130,7 @@ bash tools/benchmarks/classification/knn_imagenet/slurm_test_knn.sh ${PARTITION}
 Remarks:
 - `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment.
-- `CHECKPOINT`: the path of checkpoint model file.
+- `${CHECKPOINT}`: the path of checkpoint model file.
 - if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command.
 - `[optional arguments]`: for optional arguments, you can refer to the [script](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py)

View File

@@ -31,7 +31,7 @@ bash tools/benchmarks/mmdetection/mim_slurm_train_fpn.sh ${PARTITION} ${CONFIG}
 Remarks:
-- `CONFIG`: Use config files under `configs/benchmarks/mmdetection/`. Since repositories of OpenMMLab have support referring config files across different repositories, we can easily leverage the configs from MMDetection like:
+- `${CONFIG}`: Use config files under `configs/benchmarks/mmdetection/`. Since repositories of OpenMMLab have support referring config files across different repositories, we can easily leverage the configs from MMDetection like:
 ```shell
 _base_ = 'mmdet::mask_rcnn/mask-rcnn_r50-caffe-c4_1x_coco.py'
@@ -39,8 +39,8 @@ _base_ = 'mmdet::mask_rcnn/mask-rcnn_r50-caffe-c4_1x_coco.py'
 Writing your config files from scratch is also supported.
-- `PRETRAIN`: the pre-trained model file.
-- `GPUS`: The number of GPUs that you want to use to train. We adopt 8 GPUs for detection tasks by default.
+- `${PRETRAIN}`: the pre-trained model file.
+- `${GPUS}`: The number of GPUs that you want to use to train. We adopt 8 GPUs for detection tasks by default.
 Example:
@@ -74,7 +74,7 @@ bash tools/benchmarks/mmdetection/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHE
 Remarks:
-- `CHECKPOINT`: The well-trained detection model that you want to test.
+- `${CHECKPOINT}`: The well-trained detection model that you want to test.
 Example:

View File

@@ -29,7 +29,7 @@ bash tools/benchmarks/mmsegmentation/mim_slurm_train.sh ${PARTITION} ${CONFIG} $
 Remarks:
-- `CONFIG`: Use config files under `configs/benchmarks/mmsegmentation/`. Since repositories of OpenMMLab have support referring config files across different
+- `${CONFIG}`: Use config files under `configs/benchmarks/mmsegmentation/`. Since repositories of OpenMMLab have support referring config files across different
 repositories, we can easily leverage the configs from MMSegmentation like:
 ```shell
@@ -38,8 +38,8 @@ _base_ = 'mmseg::fcn/fcn_r50-d8_4xb2-40k_cityscapes-769x769.py'
 Writing your config files from scratch is also supported.
-- `PRETRAIN`: the pre-trained model file.
-- `GPUS`: The number of GPUs that you want to use to train. We adopt 4 GPUs for segmentation tasks by default.
+- `${PRETRAIN}`: the pre-trained model file.
+- `${GPUS}`: The number of GPUs that you want to use to train. We adopt 4 GPUs for segmentation tasks by default.
 Example:
@@ -63,7 +63,7 @@ bash tools/benchmarks/mmsegmentation/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${
 Remarks:
-- `CHECKPOINT`: The well-trained segmentation model that you want to test.
+- `${CHECKPOINT}`: The well-trained segmentation model that you want to test.
 Example:

View File

@@ -1,19 +1,18 @@
-# Classification
-- [Classification](#classification)
+# 分类
+- [分类](#分类)
 - [VOC SVM / Low-shot SVM](#voc-svm--low-shot-svm)
-- [Linear Evaluation and Fine-tuning](#linear-evaluation-and-fine-tuning)
-- [ImageNet Semi-Supervised Classification](#imagenet-semi-supervised-classification)
-- [ImageNet Nearest-Neighbor Classification](#imagenet-nearest-neighbor-classification)
+- [线性评估和微调](#线性评估和微调)
+- [ImageNet 半监督分类](#imagenet-半监督分类)
+- [ImageNet 最近邻分类](#imagenet-最近邻分类)
-In MMSelfSup, we provide many benchmarks for classification, thus the models can be evaluated on different classification tasks. Here are comprehensive tutorials and examples to explain how to run all classification benchmarks with MMSelfSup.
-We provide scripts in folder `tools/benchmarks/classification/`, which has 2 `.sh` files, 1 folder for VOC SVM related classification task and 1 folder for ImageNet nearest-neighbor classification task.
+在 MMSelfSup 中,我们为分类任务提供了许多基准,因此模型可以在不同的分类任务上进行评估。这里有详细的教程和例子来阐述如何使用 MMSelfSup 运行所有的分类基准。我们在 `tools/benchmarks/classification/` 文件夹中提供了所有的脚本,其中包含 2 个 `.sh` 文件,一个文件夹用于 VOC SVM 相关的分类任务,另一个文件夹用于 ImageNet 最近邻分类任务。
 ## VOC SVM / Low-shot SVM
-To run these benchmarks, you should first prepare your VOC datasets. Please refer to [prepare_data.md](./2_dataset_prepare.md) for the details of data preparation.
+为了运行这些基准,您首先应该准备好您的 VOC 数据集。请参考 [prepare_data.md](./2_dataset_prepare.md) 来获取数据准备的详细信息。
-To evaluate the pre-trained models, you can run the command below.
+为了评估这些预训练的模型,您可以运行如下指令。
 ```shell
 # distributed version
@@ -23,7 +22,7 @@ bash tools/benchmarks/classification/svm_voc07/dist_test_svm_pretrain.sh ${SELFS
 bash tools/benchmarks/classification/svm_voc07/slurm_test_svm_pretrain.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${PRETRAIN} ${FEATURE_LIST}
 ```
-Besides, if you want to evaluate the ckpt files saved by runner, you can run the command below.
+此外,如果您想评估由 runner 保存的 ckpt 文件,您可以运行如下指令。
 ```shell
 # distributed version
@@ -33,30 +32,29 @@ bash tools/benchmarks/classification/svm_voc07/dist_test_svm_epoch.sh ${SELFSUP_
 bash tools/benchmarks/classification/svm_voc07/slurm_test_svm_epoch.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${EPOCH} ${FEATURE_LIST}
 ```
-**To test with ckpt, the code uses the epoch\_\*.pth file, there is no need to extract weights.**
+**使用 ckpt 进行测试时,代码使用 epoch\_\*.pth 文件,不需要提取权重。**
-Remarks:
+备注:
-- `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment.
-- `${FEATURE_LIST}` is a string to specify features from layer1 to layer5 to evaluate; e.g., if you want to evaluate layer5 only, then `FEATURE_LIST` is "feat5", if you want to evaluate all features, then `FEATURE_LIST` is "feat1 feat2 feat3 feat4 feat5" (separated by space). If left empty, the default `FEATURE_LIST` is "feat5".
-- `PRETRAIN`: the pre-trained model file.
-- if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command.
-- `EPOCH` is the epoch number of the ckpt that you want to test
+- `${SELFSUP_CONFIG}` 是自监督实验的配置文件。
+- `${FEATURE_LIST}` 是一个字符串,用于指定要评估的从 layer1 到 layer5 的特征;例如,如果您只想评估 layer5,那么 `FEATURE_LIST` 是 "feat5";如果您想评估所有特征,那么 `FEATURE_LIST` 是 "feat1 feat2 feat3 feat4 feat5"(用空格分隔)。如果留空,`FEATURE_LIST` 默认是 "feat5"。
+- `${PRETRAIN}`:预训练模型文件。
+- 如果您想改变 GPU 个数,您可以在命令的前面加上 `GPUS_PER_NODE=4 GPUS=4`。
+- `${EPOCH}` 是您想要测试的 ckpt 的轮数。
-## Linear Evaluation and Fine-tuning
+## 线性评估和微调
-Linear evaluation and fine-tuning are two of the most general benchmarks. We provide config files and scripts to launch the training and testing
-for Linear Evaluation and Fine-tuning. The supported datasets are **ImageNet**, **Places205** and **iNaturalist18**.
+线性评估和微调是最常见的两个基准。我们为线性评估和微调提供了配置文件和脚本来进行训练和测试。支持的数据集有 **ImageNet**、**Places205** 和 **iNaturalist18**。
-First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab.
+首先,确保您已经安装了 [MIM](https://github.com/open-mmlab/mim),这也是 OpenMMLab 的一个项目。
 ```shell
 pip install openmim
 ```
-Besides, please refer to MMClassification for [installation](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/install.md) and [data preparation](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/getting_started.md).
+此外,请参考 MMClassification 的[安装](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/install.md)和[数据准备](https://github.com/open-mmlab/mmclassification/blob/dev-1.x/docs/en/getting_started.md)。
-Then, run the command below.
+然后运行如下命令。
 ```shell
 # distributed version
@@ -66,13 +64,12 @@ bash tools/benchmarks/classification/mim_dist_train.sh ${CONFIG} ${PRETRAIN}
 bash tools/benchmarks/classification/mim_slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG} ${PRETRAIN}
 ```
-Remarks:
+备注:
-- The default GPU number is 8. When changing GPUS, please also change `samples_per_gpu` in the config file accordingly to ensure the total batch size is 256.
-- `CONFIG`: Use config files under `configs/benchmarks/classification/`. Specifically, `imagenet` (excluding `imagenet_*percent` folders), `places205` and `inaturalist2018`.
-- `PRETRAIN`: the pre-trained model file.
+- `${CONFIG}`:使用 `configs/benchmarks/classification/` 下的配置文件。具体来说,为 `imagenet`(不包括 `imagenet_*percent` 文件夹)、`places205` 和 `inaturalist2018`。
+- `${PRETRAIN}`:预训练模型文件。
-Example:
+例子:
 ```shell
 bash ./tools/benchmarks/classification/mim_dist_train.sh \
@@ -80,7 +77,7 @@ configs/benchmarks/classification/imagenet/resnet50_linear-8xb32-coslr-100e_in1k
 work_dir/pretrained_model.pth
 ```
-If you want to test the well-trained model, please run the command below.
+如果您想测试训练好的模型,请运行如下命令。
 ```shell
 # distributed version
@@ -90,11 +87,11 @@ bash tools/benchmarks/classification/mim_dist_test.sh ${CONFIG} ${CHECKPOINT}
 bash tools/benchmarks/classification//mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT}
 ```
-Remarks:
+备注:
-- `CHECKPOINT`: The well-trained classification model that you want to test.
+- `${CHECKPOINT}`:您想测试的训练好的分类模型。
-Example:
+例子:
 ```shell
 bash ./tools/benchmarks/mmsegmentation/mim_dist_test.sh \
@@ -102,23 +99,23 @@ configs/benchmarks/classification/imagenet/resnet50_linear-8xb32-coslr-100e_in1k
 work_dir/model.pth
 ```
-## ImageNet Semi-Supervised Classification
+## ImageNet 半监督分类
-To run ImageNet semi-supervised classification, we still use the same `.sh` script as Linear Evaluation and Fine-tuning to launch training.
+为了运行 ImageNet 半监督分类,我们仍使用与线性评估和微调相同的 `.sh` 脚本来启动训练。
-Remarks:
+备注:
-- The default GPU number is 4.
-- `CONFIG`: Use config files under `configs/benchmarks/classification/imagenet/`, named `imagenet_*percent` folders.
-- `PRETRAIN`: the pre-trained model file.
+- 默认 GPU 数量是 4。
+- `${CONFIG}`:使用 `configs/benchmarks/classification/imagenet/` 下命名为 `imagenet_*percent` 的文件夹中的配置文件。
+- `${PRETRAIN}`:预训练模型文件。
-## ImageNet Nearest-Neighbor Classification
+## ImageNet 最近邻分类
-```Note
-Only support CNN-style backbones (like ResNet50).
+```备注
+仅支持 CNN 形式的主干网络(例如 ResNet50)。
 ```
-To evaluate the pre-trained models using the nearest-neighbor benchmark, you can run the command below.
+为了使用最近邻基准评估预训练模型,您可以运行如下命令。
 ```shell
 # distributed version
@@ -128,14 +125,14 @@ bash tools/benchmarks/classification/knn_imagenet/dist_test_knn.sh ${SELFSUP_CON
 bash tools/benchmarks/classification/knn_imagenet/slurm_test_knn.sh ${PARTITION} ${JOB_NAME} ${SELFSUP_CONFIG} ${CHECKPOINT} [optional arguments]
 ```
-Remarks:
+备注:
-- `${SELFSUP_CONFIG}` is the config file of the self-supervised experiment.
-- `CHECKPOINT`: the path of checkpoint model file.
-- if you want to change GPU numbers, you could add `GPUS_PER_NODE=4 GPUS=4` at the beginning of the command.
-- `[optional arguments]`: for optional arguments, you can refer to the [script](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py)
+- `${SELFSUP_CONFIG}`:自监督实验的配置文件。
+- `${CHECKPOINT}`:检查点模型文件的路径。
+- 如果您想改变 GPU 的数量,您可以在命令的前面加上 `GPUS_PER_NODE=4 GPUS=4`。
+- `[optional arguments]`:关于可选参数,您可以参考这个[脚本](https://github.com/open-mmlab/mmselfsup/blob/1.x/tools/benchmarks/classification/knn_imagenet/test_knn.py)。
-An example of command
+命令示例
 ```shell
 # distributed version

View File

@@ -1,23 +1,23 @@
-# Detection
-- [Detection](#detection)
-- [Train](#train)
-- [Test](#test)
+# 检测
+- [检测](#检测)
+- [训练](#训练)
+- [测试](#测试)
-Here, we prefer to use MMDetection to do the detection task. First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab.
+这里,我们倾向于使用 MMDetection 来做检测任务。首先,确保您已经安装了 [MIM](https://github.com/open-mmlab/mim),这也是 OpenMMLab 的一个项目。
 ```shell
 pip install openmim
 mim install 'mmdet>=3.0.0rc0'
 ```
-It is very easy to install the package.
+这个包安装起来非常简单。
-Besides, please refer to MMDet for [installation](https://mmdetection.readthedocs.io/en/dev-3.x/get_started.html) and [data preparation](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/dataset_prepare.html)
+此外,请参考 MMDetection 的[安装](https://mmdetection.readthedocs.io/en/dev-3.x/get_started.html)和[数据准备](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/dataset_prepare.html)。
-## Train
+## 训练
-After installation, you can run MMDetection with simple command.
+安装完成后,您可以使用如下简单命令运行 MMDetection。
 ```shell
 # distributed version
@@ -29,20 +29,20 @@ bash tools/benchmarks/mmdetection/mim_slurm_train_c4.sh ${PARTITION} ${CONFIG} $
 bash tools/benchmarks/mmdetection/mim_slurm_train_fpn.sh ${PARTITION} ${CONFIG} ${PRETRAIN}
 ```
-Remarks:
+备注:
-- `CONFIG`: Use config files under `configs/benchmarks/mmdetection/`. Since repositories of OpenMMLab have support referring config files across different repositories, we can easily leverage the configs from MMDetection like:
+- `${CONFIG}`:使用 `configs/benchmarks/mmdetection/` 下的配置文件。由于 OpenMMLab 的算法库支持跨不同存储库引用配置文件,因此我们可以轻松使用 MMDetection 的配置文件,例如:
 ```shell
 _base_ = 'mmdet::mask_rcnn/mask-rcnn_r50-caffe-c4_1x_coco.py'
 ```
-Writing your config files from scratch is also supported.
+从头开始编写您的配置文件也是支持的。
-- `PRETRAIN`: the pre-trained model file.
-- `GPUS`: The number of GPUs that you want to use to train. We adopt 8 GPUs for detection tasks by default.
+- `${PRETRAIN}`:预训练模型文件。
+- `${GPUS}`:您想用于训练的 GPU 数量。对于检测任务,我们默认采用 8 块 GPU。
-Example:
+例子:
 ```shell
 bash ./tools/benchmarks/mmdetection/mim_dist_train_c4.sh \
@@ -50,8 +50,8 @@ configs/benchmarks/mmdetection/coco/mask-rcnn_r50-c4_ms-1x_coco.py \
 https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth 8
 ```
-Or if you want to do detection task with [detectron2](https://github.com/facebookresearch/detectron2), we also provide some config files.
+或者,如果您想用 [detectron2](https://github.com/facebookresearch/detectron2) 来做检测任务,我们也提供了一些配置文件。
-Please refer to [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) for installation and follow the [directory structure](https://github.com/facebookresearch/detectron2/tree/main/datasets) to prepare your datasets required by detectron2.
+请参考 [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/main/INSTALL.md) 进行安装,并按照 detectron2 所需的[目录结构](https://github.com/facebookresearch/detectron2/tree/main/datasets)准备您的数据集。
 ```shell
 conda activate detectron2 # use detectron2 environment here, otherwise use open-mmlab environment
@@ -60,9 +60,9 @@ python convert-pretrain-to-detectron2.py ${WEIGHT_FILE} ${OUTPUT_FILE} # must us
 bash run.sh ${DET_CFG} ${OUTPUT_FILE}
 ```
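For readers who take the detectron2 route, here is a hedged end-to-end sketch of the commands above; every file name below is a placeholder, and the detectron2 config must be replaced by one actually provided with these benchmarks or written by yourself.

```shell
# Hypothetical walk-through with placeholder file names.
conda activate detectron2                                               # switch to the detectron2 environment
python convert-pretrain-to-detectron2.py \
    work_dir/byol_pretrain.pth work_dir/byol_pretrain_d2.pkl            # convert MMSelfSup weights to detectron2 format
bash run.sh my_voc_faster_rcnn_config.yaml work_dir/byol_pretrain_d2.pkl  # launch training/evaluation with detectron2
```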
-## Test
+## 测试
-After training, you can also run the command below to test your model.
+在训练之后,您可以运行如下命令测试您的模型。
 ```shell
 # distributed version
@@ -72,11 +72,11 @@ bash tools/benchmarks/mmdetection/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} ${GPU
 bash tools/benchmarks/mmdetection/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT}
 ```
-Remarks:
+备注:
-- `CHECKPOINT`: The well-trained detection model that you want to test.
+- `${CHECKPOINT}`:您想测试的训练好的检测模型。
-Example:
+例子:
 ```shell
 bash ./tools/benchmarks/mmdetection/mim_dist_test.sh \

View File

@@ -1,23 +1,23 @@
-# Segmentation
-- [Segmentation](#segmentation)
-- [Train](#train)
-- [Test](#test)
+# 分割
+- [分割](#分割)
+- [训练](#训练)
+- [测试](#测试)
-For semantic segmentation task, we use MMSegmentation. First, make sure you have installed [MIM](https://github.com/open-mmlab/mim), which is also a project of OpenMMLab.
+对于语义分割任务,我们使用 MMSegmentation。首先,确保您已经安装了 [MIM](https://github.com/open-mmlab/mim),这也是 OpenMMLab 的一个项目。
 ```shell
 pip install openmim
 mim install 'mmsegmentation>=1.0.0rc0'
 ```
-It is very easy to install the package.
+这个包安装起来非常简单。
-Besides, please refer to MMSegmentation for [installation](https://mmsegmentation.readthedocs.io/en/dev-1.x/get_started.html) and [data preparation](https://mmsegmentation.readthedocs.io/en/dev-1.x/user_guides/2_dataset_prepare.html).
+此外,请参考 MMSegmentation 的[安装](https://mmsegmentation.readthedocs.io/en/dev-1.x/get_started.html)和[数据准备](https://mmsegmentation.readthedocs.io/en/dev-1.x/user_guides/2_dataset_prepare.html)。
-## Train
+## 训练
-After installation, you can run MMSeg with simple command.
+安装完成后,您可以使用如下简单命令运行 MMSegmentation。
 ```shell
 # distributed version
@@ -27,21 +27,20 @@ bash tools/benchmarks/mmsegmentation/mim_dist_train.sh ${CONFIG} ${PRETRAIN} ${G
 bash tools/benchmarks/mmsegmentation/mim_slurm_train.sh ${PARTITION} ${CONFIG} ${PRETRAIN}
 ```
-Remarks:
+备注:
-- `CONFIG`: Use config files under `configs/benchmarks/mmsegmentation/`. Since repositories of OpenMMLab have support referring config files across different
-repositories, we can easily leverage the configs from MMSegmentation like:
+- `${CONFIG}`:使用 `configs/benchmarks/mmsegmentation/` 下的配置文件。由于 OpenMMLab 的算法库支持跨不同存储库引用配置文件,因此我们可以轻松使用 MMSegmentation 的配置文件,例如:
 ```shell
 _base_ = 'mmseg::fcn/fcn_r50-d8_4xb2-40k_cityscapes-769x769.py'
 ```
-Writing your config files from scratch is also supported.
+从头开始编写您的配置文件也是支持的。
-- `PRETRAIN`: the pre-trained model file.
-- `GPUS`: The number of GPUs that you want to use to train. We adopt 4 GPUs for segmentation tasks by default.
+- `${PRETRAIN}`:预训练模型文件。
+- `${GPUS}`:您想用于训练的 GPU 数量。对于分割任务,我们默认采用 4 块 GPU。
-Example:
+例子:
 ```shell
 bash ./tools/benchmarks/mmsegmentation/mim_dist_train.sh \
@@ -49,9 +48,9 @@ configs/benchmarks/mmsegmentation/voc12aug/fcn_r50-d8_4xb4-20k_voc12aug-512x512.
 https://download.openmmlab.com/mmselfsup/1.x/byol/byol_resnet50_16xb256-coslr-200e_in1k/byol_resnet50_16xb256-coslr-200e_in1k_20220825-de817331.pth 4
 ```
-## Test
+## 测试
-After training, you can also run the command below to test your model.
+在训练之后,您可以运行如下命令测试您的模型。
 ```shell
 # distributed version
@@ -61,11 +60,11 @@ bash tools/benchmarks/mmsegmentation/mim_dist_test.sh ${CONFIG} ${CHECKPOINT} ${
 bash tools/benchmarks/mmsegmentation/mim_slurm_test.sh ${PARTITION} ${CONFIG} ${CHECKPOINT}
 ```
-Remarks:
+备注:
-- `CHECKPOINT`: The well-trained segmentation model that you want to test.
+- `${CHECKPOINT}`:您想测试的训练好的分割模型。
-Example:
+例子:
 ```shell
 bash ./tools/benchmarks/mmsegmentation/mim_dist_test.sh \