[Docs] Update config.md, faq.md and pull_request_template.md (#190)

* Update Chinese faq.md

add the description of the difference between MMYOLO
 and MMDet

* Update English faq.md

add the description of the difference between MMYOLO
and MMDet

* Update Chinese config.md

* Update English config.md

* Update pull_request_template.md

* Delete the unnecessary description in config.md

* Delete the unnecessary description in config.md
Range King 2022-10-22 21:25:47 +08:00 committed by Haian Huang(深度眸)
parent 15e0cfa4b5
commit 2f5d16f5f1
5 changed files with 77 additions and 41 deletions

pull_request_template.md

@ -2,7 +2,7 @@ Thanks for your contribution and we appreciate it a lot. The following instructi
## Motivation
Please describe the motivation of this PR and the goal you want to achieve through this PR.
Please describe the motivation for this PR and the goal you want to achieve through this PR.
## Modification
@ -10,16 +10,16 @@ Please briefly describe what modification is made in this PR.
## BC-breaking (Optional)
Does the modification introduce changes that break the backward-compatibility of the downstream repos?
Does the modification introduce changes that break the backward compatibility of the downstream repos?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
## Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
## Checklist
1. Pre-commit or other linting tools are used to fix the potential lint issues.
2. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness.
3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDetection or MMClassification.
1. Pre-commit or other linting tools are used to fix potential lint issues.
2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
3. If the modification has a potential influence on downstream projects, this PR should be tested with downstream projects, like MMDetection or MMClassification.
4. The documentation has been modified accordingly, like docstring or example tutorials.

faq.md (English)

@ -1 +1,19 @@
# Frequently Asked Questions
We list some common problems many users face and their corresponding solutions here. Feel free to enrich the list if you find any frequent issues and have ways to help others to solve them. If the contents here do not cover your issue, please create an [issue](https://github.com/open-mmlab/mmyolo/issues/new/choose) and make sure you fill in all the required information in the template.
## Why do we need to launch MMYOLO? Why do we need to open a separate repository instead of putting it directly into MMDetection?
Since open-sourcing MMYOLO, we have kept receiving similar questions from our community, and the answers can be summarized in the following three points.
**(1) Unified operation and inference platform**
At present, many improved YOLO algorithms have appeared in the field of object detection and they are very popular, but these algorithms are implemented in different frameworks with different backends. They differ considerably from one another and lack a unified, convenient, and fair evaluation process from training to deployment.
**(2) License restrictions**
As we all know, YOLOv5 and its derived algorithms such as YOLOv6 and YOLOv7 are released under the GPL 3.0 license, which differs from the Apache 2.0 license of MMDetection. Due to this license issue, MMYOLO cannot be merged directly into MMDetection.
**(3) Multi-task support**
There is another far-reaching reason: **MMYOLO's tasks are not limited to MMDetection**. More tasks will be supported in the future, such as keypoint-related applications based on MMPose and tracking-related applications based on MMTracking, so it is not suitable to incorporate MMYOLO directly into MMDetection.

config.md (English)

@ -4,7 +4,7 @@ MMYOLO and other OpenMMLab repositories use [MMEngine's config system](https://m
## Config file content
MMYOLO uses a modular design, all modules with different functions can be configured through the config. Taking [YOLOv5-s](https://github.com/open-mmlab/mmyolo/blob/main/configs/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py) as an example, we will introduce each field in the config according to different function modules:
MMYOLO uses a modular design, all modules with different functions can be configured through the config. Taking [yolov5_s-v61_syncbn_8xb16-300e_coco.py](https://github.com/open-mmlab/mmyolo/blob/main/configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py) as an example, we will introduce each field in the config according to different function modules:
### Important parameters
@ -55,7 +55,7 @@ model = dict(
norm_cfg=dict(type='BN', momentum=0.03, eps=0.001), # The config of normalization layers.
act_cfg=dict(type='SiLU', inplace=True)), # The config of activation function
bbox_head=dict(
type='YOLOv5Head', # THe type of BBox head is 'YOLOv5Head', we also support 'YOLOv6Head', 'YOLOXHead'
type='YOLOv5Head', # The type of BBox head is 'YOLOv5Head', we also support 'YOLOv6Head', 'YOLOXHead'
head_module=dict(
type='YOLOv5HeadModule', # The type of Head module is 'YOLOv5HeadModule', we also support 'YOLOv6HeadModule', 'YOLOXHeadModule'
num_classes=80, # Number of classes for classification
@ -69,7 +69,7 @@ model = dict(
strides=strides), # The strides of the anchor generator. This is consistent with the FPN feature strides. The strides will be taken as base_sizes if base_sizes is not set.
),
test_cfg=dict(
multi_label=True, # The config of multi-label for multi-clas prediction. THe default setting is True.
multi_label=True, # The config of multi-label for multi-class prediction. The default setting is True.
nms_pre=30000, # The number of boxes before NMS
score_thr=0.001, # Threshold to filter out boxes.
nms=dict(type='nms', # Type of NMS
@ -151,7 +151,7 @@ train_dataloader = dict( # Train dataloader config
pipeline=train_pipeline))
```
In the testing phase of YOLOv5, the `Letter Resize` method resizes all the test images to the same scale, which preserves the aspect ratio of all testing images. Therefore, the validation and testing phases share the same data pipeline.
In the testing phase of YOLOv5, the [Letter Resize](https://github.com/open-mmlab/mmyolo/blob/main/mmyolo/datasets/transforms/transforms.py#L116) method resizes all the test images to the same scale, which preserves the aspect ratio of all testing images. Therefore, the validation and testing phases share the same data pipeline.
```python
test_pipeline = [ # Validation/ Testing dataloader config
@ -270,7 +270,7 @@ test_cfg = dict(type='TestLoop') # The testing loop type
### Optimization config
`optim_wrapper` is the field to configure optimization-related settings. The optimizer wrapper not only provides the functions of the optimizer, but also supports functions such as gradient clipping, mixed precision training, etc. Find out more in [optimizer wrapper tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/optimizer.html).
`optim_wrapper` is the field to configure optimization-related settings. The optimizer wrapper not only provides the functions of the optimizer but also supports functions such as gradient clipping, mixed precision training, etc. Find out more in the [optimizer wrapper tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/optimizer.html).
```python
optim_wrapper = dict( # Optimizer wrapper config
@ -286,7 +286,7 @@ optim_wrapper = dict( # Optimizer wrapper config
constructor='YOLOv5OptimizerConstructor') # The constructor for YOLOv5 optimizer
```
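The prose above also mentions gradient clipping and mixed precision training; as a hedged illustration, both are switched on through this same field. `AmpOptimWrapper` and `clip_grad` are standard MMEngine options, and the concrete values below are illustrative rather than taken from the MMYOLO configs:

```python
optim_wrapper = dict(
    type='AmpOptimWrapper',  # Enable automatic mixed precision training
    clip_grad=dict(max_norm=10.0),  # Clip gradients whose norm exceeds 10.0
    optimizer=dict(  # The wrapped optimizer itself
        type='SGD', lr=0.01, momentum=0.937, weight_decay=0.0005))
```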
`param_scheduler` is the field that configures methods of adjusting optimization hyperparameters such as learning rate and momentum. Users can combine multiple schedulers to create a desired parameter adjustment strategy. Find more in [parameter scheduler tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/param_scheduler.html). In MMYOLO, no parameter optimizer is introduced.
`param_scheduler` is the field that configures methods of adjusting optimization hyperparameters such as learning rate and momentum. Users can combine multiple schedulers to create a desired parameter adjustment strategy. Find more in the [parameter scheduler tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/param_scheduler.html). In MMYOLO, no parameter scheduler is used.
```python
param_scheduler = None
@ -296,7 +296,7 @@ param_scheduler = None
Users can attach hooks to training, validation, and testing loops to insert some operations during running. There are two different hook fields, one is `default_hooks` and the other is `custom_hooks`.
`default_hooks` is a dict of hook configs for the hooks that must be required at the runtime. They have default priority which should not be modified. If not set, runner will use the default values. To disable a default hook, users can set its config to `None`.
`default_hooks` is a dict of hook configs for the hooks that are required at runtime. They have a default priority, which should not be modified. If not set, the runner will use the default values. To disable a default hook, users can set its config to `None`.
```python
default_hooks = dict(
@ -311,7 +311,7 @@ default_hooks = dict(
max_keep_ckpts=3)) # The maximum checkpoints to keep.
```
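For example, a minimal sketch of disabling one of the default hooks from a child config; the logger hook is picked purely as an illustration:

```python
default_hooks = dict(logger=None)  # Turn off the default LoggerHook
```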
`custom_hooks` is a list of hook configs. Users can develop their own hooks and insert them in this field.
`custom_hooks` is a list of hook configs. Users can develop their hooks and insert them in this field.
```python
custom_hooks = [
@ -354,14 +354,14 @@ resume = False # Whether to resume from the checkpoint defined in `load_from`.
`config/_base_` contains default runtime. The configs that are composed of components from `_base_` are called _primitive_.
For all configs under the same folder, it is recommended to have only **one** _primitive_ config. All other configs should be inheritred from the _primitive_ config. In this way, the maximum of inheritance level is 3.
For all configs under the same folder, it is recommended to have only **one** _primitive_ config. All other configs should be inherited from the _primitive_ config. In this way, the maximum inheritance level is 3.
For easy understanding, we recommend contributors inherit from existing methods.
For example, if some modification is made based on YOLOv5-s, such as modifying the depth of the network, users may first inherit the `_base_ = ./yolov5_s-v61_syncbn_8xb16-300e_coco.py `, then modify the necessary fields in the config files.
If you are building an entirely new method that does not share the structure with any of the existing methods, you may create a folder `yolov100` under `configs`,
Please refer to [mmengine config tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/config.html) for more detailes.
Please refer to the [mmengine config tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/config.html) for more details.
By setting the `_base_` field, we can set which files the current configuration file inherits from.
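As a minimal sketch of such a child config, assuming (as in the YOLOv5 configs of this repo) that both the backbone and the neck expose a `deepen_factor` argument, and with a purely illustrative value:

```python
_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'

# Override only the fields that differ from the primitive config.
deepen_factor = 0.67  # Illustrative; YOLOv5-s itself uses 0.33
model = dict(
    backbone=dict(deepen_factor=deepen_factor),
    neck=dict(deepen_factor=deepen_factor))
```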
@ -385,7 +385,7 @@ If you wish to inspect the config file, you may run `mim run mmdet print_config
### Ignore some fields in the base configs
Sometimes, you may set `_delete_=True` to ignore some of the fields in base configs.
You may refer to [mmengine config tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/config.html) for simple illustration.
You may refer to the [mmengine config tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/config.html) for a simple illustration.
In MMYOLO, for example, to change the backbone of YOLOv5 with the following config.
@ -403,7 +403,7 @@ model = dict(
bbox_head=dict(...))
```
The `_delete_=True` would replace all old keys in `backbone` field with new keys. For example, `YOLOv5` uses `YOLOv5CSPDarknet`, it is necessary to replace the backbone to `YOLOv6EfficientRep`. Since `YOLOv5CSPDarknet` and `YOLOv6EfficientRep` have different fields, you need to use `_delete_=True` to replace all old keys in the `backbone` field.
Setting `_delete_=True` replaces all old keys in the `backbone` field with the new keys. For example, `YOLOv5` uses `YOLOv5CSPDarknet`; to replace the backbone with `YOLOv6EfficientRep`, whose fields differ from those of `YOLOv5CSPDarknet`, you need to use `_delete_=True` so that all the old keys in the `backbone` field are removed.
```python
_base_ = '../yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py'
@ -422,9 +422,9 @@ model = dict(
### Use intermediate variables in configs
Some intermediate variables are used in the configs files, like `train_pipeline` and `test_pipeline` in datasets. It's worth noting that when modifying intermediate variables in the children configs, users need to pass the intermediate variables into corresponding fields again.
For example, we would like to change the `image_scale` during training and add `YOLOv5MixUp` data augmentation, `img_scale/train_pipeline/test_pipeline` are intermediate variable we would like modify.
For example, if we would like to change the `img_scale` during training and add the `YOLOv5MixUp` data augmentation, `img_scale`, `train_pipeline`, and `test_pipeline` are the intermediate variables we would like to modify.
**Notice**: `YOLOv5MixUp` requires adding the `pre_transform` and `mosaic_affine_pipeline` to its own `pre_transform` field. Please refer to [The description of YOLOv5 algorithm and its implementation](../algorithm_descriptions/yolov5_description.md) for detailed process and diagrams.
**Notice**: `YOLOv5MixUp` requires adding the `pre_transform` and `mosaic_affine_pipeline` to its `train_pipeline` field. Please refer to [The description of YOLOv5 algorithm and its implementation](../algorithm_descriptions/yolov5_description.md) for detailed processes and diagrams.
```python
_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'
@ -494,7 +494,7 @@ val_dataloader = dict(dataset=dict(pipeline=test_pipeline))
test_dataloader = dict(dataset=dict(pipeline=test_pipeline))
```
We first define new `train_pipeline`/`test_pipeline` and pass it into `data`.
We first define a new `train_pipeline`/`test_pipeline` and pass it into `data`.
Likewise, if we want to switch from `SyncBN` to `BN` or `MMSyncBN`, we need to modify every `norm_cfg` in the configuration file.
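A minimal sketch of that kind of edit, assuming the backbone and neck are the modules carrying `norm_cfg` in this model; any other module with normalization layers would need the same override:

```python
_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'

norm_cfg = dict(type='BN', momentum=0.03, eps=0.001)  # Switch from SyncBN to BN
model = dict(
    backbone=dict(norm_cfg=norm_cfg),
    neck=dict(norm_cfg=norm_cfg))
```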
@ -509,7 +509,7 @@ model = dict(
### Reuse variables in \_base\_ file
If the users want to reuse the variables in the base file, they can get a copy of the corresponding variable by using `{{_base_.xxx}}`. The latest version of MMEngine also support reusing variables without `{{}}` usage.
If the users want to reuse the variables in the base file, they can get a copy of the corresponding variable by using `{{_base_.xxx}}`. The latest version of MMEngine also supports reusing variables without `{{}}` usage.
E.g:
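A minimal sketch of the syntax, assuming the base config defines a `pre_transform` pipeline:

```python
_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'

# Get a copy of `pre_transform` defined in the base config.
pre_transform = _base_.pre_transform
# Older MMEngine versions require the template form instead:
# pre_transform = {{_base_.pre_transform}}
```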
@ -545,16 +545,16 @@ We follow the below style to name config files. Contributors are advised to foll
{algorithm name}_{model component names [component1]_[component2]_[...]}-[version id]_[norm setting]_[data preprocessor type]_{training settings}_{training dataset information}_{testing dataset information}.py
```
The file name is divided into 8 name fields, which has 4 required parts and 4 optional parts. All parts and components are connected with `_` and words of each part or component should be connected with `-`. `{}` indicates the required name field, `[]` indicates the optional name field.
The file name is divided into 8 name fields, which have 4 required parts and 4 optional parts. All parts and components are connected with `_` and words of each part or component should be connected with `-`. `{}` indicates the required name field, and `[]` indicates the optional name field.
- `{algorithm name}`: The name of the algorithm. It can be a detector name such as `yolov5`, `yolov6`, `yolox` etc.
- `{algorithm name}`: The name of the algorithm. It can be a detector name such as `yolov5`, `yolov6`, `yolox`, etc.
- `{component names}`: Names of the components used in the algorithm such as backbone, neck, etc. For example, `yolov5_s` means its `deepen_factor` is `0.33` and its `widen_factor` is `0.5`.
- `[version_id]` (optional): Since the evolution of YOLO series is much faster than traditional object detection algorithms, `version id` is used to distinguish the differences between different sub-versions. E.g, YOLOv5-3.0 uses the `Focus` layer as the stem layer, and YOLOv5-6.0 uses the `Conv` layer as the stem layer.
- `[version_id]` (optional): Since the evolution of the YOLO series is much faster than traditional object detection algorithms, `version id` is used to distinguish the differences between different sub-versions. E.g, YOLOv5-3.0 uses the `Focus` layer as the stem layer, and YOLOv5-6.0 uses the `Conv` layer as the stem layer.
- `[norm_setting]` (optional): `bn` indicates `Batch Normalization`, `syncbn` indicates `Synchronized Batch Normalization`
- `[data preprocessor type]` (optional): `fast` incorporates `YOLOv5DetDataPreprocessor` and `yolov5_collate` to preprocess data. The training speed is faster than default `mmdet.DetDataPreprocessor`, while results in extending tge overall pipeline to multi-task learning.
- `{training settings}`: Information of training settings such as batch size, augmentations, loss trick, scheduler, and epochs/iterations. For example: `8xb16-300e_coco` means using 8-gpus x 16-images-per-gpu, and train 300 epochs.
- `[data preprocessor type]` (optional): `fast` incorporates [YOLOv5DetDataPreprocessor](https://github.com/open-mmlab/mmyolo/blob/main/mmyolo/models/data_preprocessors/data_preprocessor.py#L9) and [yolov5_collate](https://github.com/open-mmlab/mmyolo/blob/main/mmyolo/datasets/utils.py#L12) to preprocess data. The training speed is faster than with the default `mmdet.DetDataPreprocessor`, at the cost of less flexibility when extending the overall pipeline to multi-task learning.
- `{training settings}`: Information of training settings such as batch size, augmentations, loss trick, scheduler, and epochs/iterations. For example, `8xb16-300e_coco` means using 8 GPUs x 16 images per GPU and training for 300 epochs.
Some abbreviations:
- `{gpu x batch_per_gpu}`: GPUs and samples per GPU. `bN` indicates N batch size per GPU. E.g. `4xb4` is the short term of 4-gpus x 4-images-per-gpu. And `8xb2` is used by default if not mentioned.
- `{gpu x batch_per_gpu}`: GPUs and samples per GPU. For example, `4xb4` is short for 4 GPUs x 4 images per GPU.
- `{schedule}`: training schedule, default option in MMYOLO is 300 epochs.
- `{training dataset information}`: Training dataset names like `coco`, `cityscapes`, `voc-0712`, `wider-face`, `balloon`.
- `{training dataset information}`: Training dataset names like `coco`, `cityscapes`, `voc-0712`, `wider-face`, and `balloon`.
- `[testing dataset information]` (optional): Testing dataset name for models trained on one dataset but tested on another. If not mentioned, it means the model was trained and tested on the same dataset type.
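As a worked reading of this convention, a name such as `yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py` breaks down into: algorithm name `yolov5`, component name `s` (`deepen_factor=0.33`, `widen_factor=0.5`), version id `v61` (YOLOv5 v6.1), norm setting `syncbn`, data preprocessor type `fast`, training settings `8xb16-300e` (8 GPUs x 16 images per GPU for 300 epochs), and training dataset `coco`; the absence of a testing dataset field means the model is tested on the same dataset type.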

faq.md (Chinese)

@ -1 +1,19 @@
# Frequently Asked Questions
We list some common problems encountered during use and their corresponding solutions here. If you find that some questions have been left out, feel free to open a PR to enrich this list. If you cannot get help here, please create an [issue](https://github.com/open-mmlab/mmyolo/issues/new/choose), but please fill in all the required information in the template, which helps us locate the problem faster.
## Why launch MMYOLO? Why open a separate repository instead of putting it directly into MMDetection?
Since open-sourcing, we have kept receiving similar questions from our community, and the answers can be summarized in the following three points:
**(1) Unified operation and inference platform**
At present, many improved YOLO algorithms have appeared in the field of object detection and they are very popular, but these algorithms are implemented in different frameworks with different backends. They differ considerably from one another and lack a unified, convenient, and fair evaluation process from training to deployment.
**(2) License restrictions**
As we all know, YOLOv5 and its derived algorithms such as YOLOv6 and YOLOv7 are released under the GPL 3.0 license, which differs from the Apache license of MMDetection. Due to this license issue, MMYOLO cannot be merged directly into MMDetection.
**(3) Multi-task support**
There is another far-reaching reason: **MMYOLO's tasks are not limited to MMDetection**. More tasks will be supported in the future, such as keypoint-related applications based on MMPose and tracking-related applications based on MMTracking, so it is not suitable to merge MMYOLO directly into MMDetection.

config.md (Chinese)

@ -4,7 +4,7 @@ MMYOLO 和其他 OpenMMLab 仓库使用 [MMEngine 的配置文件系统](https:/
## Config file content
MMYOLO uses a modular design; all functional modules can be configured through the config file. Taking [YOLOv5-s](https://github.com/open-mmlab/mmyolo/blob/main/configs/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py) as an example, we will introduce each field in the config file according to the different functional modules:
MMYOLO uses a modular design; all functional modules can be configured through the config file. Taking [yolov5_s-v61_syncbn_8xb16-300e_coco.py](https://github.com/open-mmlab/mmyolo/blob/main/configs/yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py) as an example, we will introduce each field in the config file according to the different functional modules:
### Important parameters
@ -151,7 +151,7 @@ train_dataloader = dict( # 训练 dataloader 配置
pipeline=train_pipeline)) # This is the data processing pipeline defined by the previously created train_pipeline
```
In the testing phase, YOLOv5 uses the `Letter Resize` method to resize all test images to the same scale, which effectively preserves the aspect ratio of the images. Therefore, the same data pipeline is used for inference during validation and evaluation.
In the testing phase, YOLOv5 uses the [Letter Resize](https://github.com/open-mmlab/mmyolo/blob/main/mmyolo/datasets/transforms/transforms.py#L116) method to resize all test images to the same scale, which effectively preserves the aspect ratio of the images. Therefore, the same data pipeline is used for inference during validation and evaluation.
```python
test_pipeline = [ # Test data processing pipeline
@ -542,14 +542,14 @@ pre_transform = _base_.pre_transform # 变量 pre_transform 等于 _base_ 中定
The file name is divided into 8 parts, of which 4 are required and 4 are optional. The parts are connected with `_`, and the words within each part should be connected with `-`. `{}` indicates a required part, and `[]` indicates an optional part.
- `{algorithm name}`: The name of the algorithm. It can be a detector name, such as `yolov5`, `yolov6`, `yolox`, etc.
- `{component names}`: The names of the components used in the algorithm, such as the backbone, neck, etc. For example, yolov5_s means its depth scaling factor is `deepen_factor=0.33` and its width scaling factor is `widen_factor=0.5`.
- `[version_id]` (optional): Since the YOLO series iterates much faster than traditional object detection algorithms, a `version id` is used to distinguish different sub-versions. For example, YOLOv5 3.0 uses the `Focus` layer as the first downsampling layer, while versions after 6.0 use the `Conv` layer as the first downsampling layer.
- `[norm_setting]` (optional): `bn` indicates `Batch Normalization`, and `syncbn` indicates `Synchronized Batch Normalization`.
- `[data preprocessor type]` (optional): `fast` means `YOLOv5DetDataPreprocessor` is used together with `yolov5_collate` for data preprocessing; the training speed is faster than with the default `mmdet.DetDataPreprocessor`, but the flexibility for multi-task processing is lower.
- `{training settings}`: Information about the training settings, such as batch size, data augmentation, loss, parameter scheduling, and maximum training epochs/iterations. For example, `8xb16-300e_coco` means using 8 gpus with 16 images per gpu and training for 300 epochs.
- `{algorithm name}`: The name of the algorithm. It can be a detector name, such as `yolov5`, `yolov6`, `yolox`, etc.
- `{component names}`: The names of the components used in the algorithm, such as the backbone, neck, etc. For example, yolov5_s means its depth scaling factor is `deepen_factor=0.33` and its width scaling factor is `widen_factor=0.5`.
- `[version_id]` (optional): Since the YOLO series iterates much faster than traditional object detection algorithms, a `version id` is used to distinguish different sub-versions. For example, YOLOv5 3.0 uses the `Focus` layer as the first downsampling layer, while versions after 6.0 use the `Conv` layer as the first downsampling layer.
- `[norm_setting]` (optional): `bn` indicates `Batch Normalization`, and `syncbn` indicates `Synchronized Batch Normalization`.
- `[data preprocessor type]` (optional): `fast` means [YOLOv5DetDataPreprocessor](https://github.com/open-mmlab/mmyolo/blob/main/mmyolo/models/data_preprocessors/data_preprocessor.py#L9) is used together with [yolov5_collate](https://github.com/open-mmlab/mmyolo/blob/main/mmyolo/datasets/utils.py#L12) for data preprocessing; the training speed is faster than with the default `mmdet.DetDataPreprocessor`, but the flexibility for multi-task processing is lower.
- `{training settings}`: Information about the training settings, such as batch size, data augmentation, loss, parameter scheduling, and maximum training epochs/iterations. For example, `8xb16-300e_coco` means using 8 GPUs with 16 images per GPU and training for 300 epochs.
Abbreviations:
- `{gpu x batch_per_gpu}`: The number of GPUs and the number of samples per GPU. `bN` indicates a batch size of N per GPU. For example, `4x4b` is short for 4 GPUs with 4 images per GPU. If not specified, 8 GPUs with 2 images per GPU is used by default.
- `{schedule}`: The training schedule; the default in MMYOLO is 300 epochs.
- `{training dataset information}`: The training dataset, such as `coco`, `cityscapes`, `voc-0712`, `wider-face`, `balloon`.
- `[testing dataset information]` (optional): The testing dataset, for models trained on one dataset and tested on another. If not specified, the training and testing dataset types are the same.
- `{gpu x batch_per_gpu}`: The number of GPUs and the number of samples per GPU. For example, `4x4b` is short for 4 GPUs with 4 images per GPU.
- `{schedule}`: The training schedule; the default in MMYOLO is 300 epochs.
- `{training dataset information}`: The training dataset, such as `coco`, `cityscapes`, `voc-0712`, `wider-face`, `balloon`.
- `[testing dataset information]` (optional): The testing dataset, for models trained on one dataset and tested on another. If not specified, the training and testing dataset types are the same.