[Doc]How to write config (#139)

* add ncnn test exporter in test_ops.py

* add ncnn test exporter in utils.py

* add onnxruntime and tensorrt ops test

* fix blank line

* fix comment
add nms ops test

* remove nms test

* add test sample
add docstring

* remove nms test

* fix grid_sample
add type hint

* fix problem

* fix docstring

* add nms batch_nms multi_level_roi_align

* add test data

* fix problem

* rm pkl file dependency

* rm file

* add docstring

* remove multi_level dependence

* add mmseg module unittest

* add mmseg test

* add mmseg model unit test

* fix blank line

* rename file

* add syncbn2bn unit test

* add apis/export

* lint

* lint

* ??

* delete#

* fix problems

* add mmcv unit test

* add docs about how to create config file

* fix :

* add zh docs about how to create config

* add full example

* fix comment

* add note

* fix problem

* fix catalog

* fix catalog

* fix catalog

* fix docs

* fix cn docs

* fix lint

* fix docs

* fix space

* add mmocr link

* fix problem

* fix new

Co-authored-by: SingleZombie <singlezombie@163.com>
VVsssssk 2021-10-29 18:04:11 +08:00 committed by GitHub
parent ef88d20241
commit 985bb6ad34
8 changed files with 408 additions and 6 deletions


@@ -46,7 +46,7 @@ Please refer to [build.md](docs/build.md) for installation.
 ## Getting Started
-Please read [how_to_convert_model.md](docs/tutorials/how_to_convert_model.md) for the basic usage of MMDeploy. There are also tutorials for [how to create config](docs/tutorials/how_to_create_config.md), [how to support new models](docs/tutorials/how_to_support_new_models.md) and [how to test model](docs/tutorials/how_to_test_model.md).
+Please read [how_to_convert_model.md](docs/tutorials/how_to_convert_model.md) for the basic usage of MMDeploy. There are also tutorials for [how to write config](docs/tutorials/how_to_write_config.md), [how to support new models](docs/tutorials/how_to_support_new_models.md) and [how to test model](docs/tutorials/how_to_test_model.md).
 Please refer to [FAQ](docs/faq.md) for frequently asked questions.


@@ -47,7 +47,7 @@ MMDeploy is an open-source deep learning model deployment toolbox. It is part of the [OpenMMLab](h
 Please read [how to convert model](docs/tutorials/how_to_convert_model.md) to learn the basic usage of MMDeploy.
-We also provide tutorials such as [how to create a config file](docs/tutorials/how_to_create_config.md), [how to support new models](docs/tutorials/how_to_support_new_models.md) and [how to test a model](docs/tutorials/how_to_test_model.md).
+We also provide tutorials such as [how to write a config file](docs/tutorials/how_to_write_config.md), [how to support new models](docs/tutorials/how_to_support_new_models.md) and [how to test a model](docs/tutorials/how_to_test_model.md).
 If you encounter any problems, please refer to the [FAQ](docs/faq.md).


@@ -14,7 +14,7 @@ You can switch between Chinese and English documents in the lower-left corner of
    :caption: Tutorials
    tutorials/how_to_convert_model.md
-   tutorials/how_to_create_config.md
+   tutorials/how_to_write_config.md
    tutorials/how_to_evaluate_a_model.md
    tutorials/how_to_test_model.md
    tutorials/how_to_support_new_models.md


@@ -1 +0,0 @@
-## How to create config


@@ -0,0 +1,201 @@
## How to write config
This tutorial describes how to write a config for model conversion and deployment. The deployment config includes an `onnx config`, a `codebase config`, and a `backend config`.
<!-- TOC -->
- [How to write config](#how-to-write-config)
  - [1. How to write onnx config](#1-how-to-write-onnx-config)
    - [Description of onnx config arguments](#description-of-onnx-config-arguments)
      - [Example](#example)
    - [If you need to use dynamic axes](#if-you-need-to-use-dynamic-axes)
      - [Example](#example-1)
  - [2. How to write codebase config](#2-how-to-write-codebase-config)
    - [Description of codebase config arguments](#description-of-codebase-config-arguments)
      - [Example](#example-2)
    - [If you need to use the partition model](#if-you-need-to-use-the-partition-model)
      - [Example](#example-3)
    - [List of tasks in all codebases](#list-of-tasks-in-all-codebases)
  - [3. How to write backend config](#3-how-to-write-backend-config)
    - [Example](#example-4)
  - [4. A complete example of mmcls on TensorRT](#4-a-complete-example-of-mmcls-on-tensorrt)
  - [5. How to write model config](#5-how-to-write-model-config)
  - [6. Reminder](#6-reminder)
  - [7. FAQs](#7-faqs)
<!-- TOC -->
### 1. How to write onnx config
The onnx config describes how to export the model from PyTorch to ONNX.
#### Description of onnx config arguments
- `type`: Type of config dict. Default is `onnx`.
- `export_params`: If specified, all parameters will be exported. Set this to False if you want to export an untrained model.
- `keep_initializers_as_inputs`: If True, all the initializers (typically corresponding to parameters) in the exported graph will also be added as inputs to the graph. If False, then initializers are not added as inputs to the graph, and only the non-parameter inputs are added as inputs.
- `opset_version`: The ONNX opset version; 11 by default.
- `save_file`: Output onnx file.
- `input_names`: Names to assign to the input nodes of the graph.
- `output_names`: Names to assign to the output nodes of the graph.
- `input_shape`: The height and width of the input tensor to the model.
##### Example
```python
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=None)
```
#### If you need to use dynamic axes
If dynamic shapes are required for the model's inputs and outputs, you need to add a `dynamic_axes` dict to the onnx config.
- `dynamic_axes`: Describes which dimensions of the inputs and outputs are dynamic.
##### Example
```python
dynamic_axes = {
    'input': {
        0: 'batch',
        2: 'height',
        3: 'width'
    },
    'dets': {
        0: 'batch',
        1: 'num_dets',
    },
    'labels': {
        0: 'batch',
        1: 'num_dets',
    },
}
```
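Note that `dynamic_axes` is not a standalone config: it is passed as one more argument of the onnx config, as the complete example in section 4 shows. A minimal sketch combining it with the arguments above (the output names here follow the detection example):
```python
# `dynamic_axes` sits alongside the other onnx config arguments.
onnx_config = dict(
    type='onnx',
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['dets', 'labels'],
    dynamic_axes={
        'input': {0: 'batch', 2: 'height', 3: 'width'},
        'dets': {0: 'batch', 1: 'num_dets'},
        'labels': {0: 'batch', 1: 'num_dets'},
    })
```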
### 2. How to write codebase config
The codebase config contains information about the model's codebase, such as its type and the task type.
#### Description of codebase config arguments
- `type`: Model's codebase, including `mmcls`, `mmdet`, `mmseg`, `mmocr`, `mmedit`.
- `task`: Model's task type, referring to [List of tasks in all codebases](#list-of-tasks-in-all-codebases).
##### Example
```python
codebase_config = dict(type='mmcls', task='Classification')
```
#### If you need to use the partition model
If you want to partition the model, you need to add a partition config dict. Note that currently only MMDetection models support partitioning.
- `type`: Model's task type, referring to the [List of tasks in all codebases](#list-of-tasks-in-all-codebases).
##### Example
```python
partition_config = dict(type='single_stage', apply_marks=True)
```
#### List of tasks in all codebases
| codebase | task | partition |
| :--------------: | :--------------: | :-------: |
| mmcls | classification | N |
| mmdet | single-stage | Y |
| mmdet | two-stage | Y |
| mmseg | segmentation | N |
| mmocr | text-detection | N |
| mmocr | text-recognition | N |
| mmedit | super-resolution | N |
### 3. How to write backend config
The backend config is mainly used to specify the backend on which the model runs and to provide the information the model needs when running on that backend. See [ONNX Runtime](../backends/onnxruntime.md), [TensorRT](../backends/tensorrt.md), [NCNN](../backends/ncnn.md) and [PPL](../backends/ppl.md).
- `type`: Model's backend, including `onnxruntime`, `ncnn`, `ppl`, `tensorrt`.
#### Example
```python
# `trt` must be imported so that `trt.Logger` below is defined.
import tensorrt as trt

backend_config = dict(
    type='tensorrt',
    common_config=dict(
        fp16_mode=False, log_level=trt.Logger.INFO,
        max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 512, 1024],
                    opt_shape=[1, 3, 1024, 2048],
                    max_shape=[1, 3, 2048, 2048])))
    ])
```
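Not every backend needs `common_config` or `model_inputs`. For a backend that requires no extra runtime information, the config can be a single field; a minimal sketch, assuming ONNX Runtime needs no further options here:
```python
# Only the backend type is required when there is nothing else to configure.
backend_config = dict(type='onnxruntime')
```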
### 4. A complete example of mmcls on TensorRT
Here we provide a complete deployment config of mmcls on TensorRT.
```python
import tensorrt as trt

codebase_config = dict(type='mmcls', task='Classification')
backend_config = dict(
    type='tensorrt',
    common_config=dict(
        fp16_mode=False,
        log_level=trt.Logger.INFO,
        max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 224, 224],
                    opt_shape=[4, 3, 224, 224],
                    max_shape=[64, 3, 224, 224])))])
onnx_config = dict(
    type='onnx',
    dynamic_axes={
        'input': {
            0: 'batch',
            2: 'height',
            3: 'width'
        },
        'output': {
            0: 'batch'
        }
    },
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=[224, 224])
partition_config = None
```
### 5. How to write model config
Write the model config file according to the model's codebase. The model config file is used to initialize the model; see [MMClassification](https://github.com/open-mmlab/mmclassification/blob/master/docs/tutorials/config.md), [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/tutorials/config.md), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/tutorials/config.md), [MMOCR](https://github.com/open-mmlab/mmocr/tree/main/configs), [MMEditing](https://github.com/open-mmlab/mmediting/blob/master/docs/config.md).
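Both the deployment config and the model config are regular mmcv config files, so they can be loaded and inspected with `mmcv.Config`. A minimal sketch (the file paths below are hypothetical):
```python
from mmcv import Config

# Hypothetical paths; substitute your own deployment and model configs.
deploy_cfg = Config.fromfile('classification_tensorrt_dynamic.py')
model_cfg = Config.fromfile('resnet18_b32x8_imagenet.py')

# mmcv.Config supports attribute-style access to the dicts defined above.
print(deploy_cfg.backend_config.type)    # 'tensorrt'
print(deploy_cfg.onnx_config.save_file)  # 'end2end.onnx'
```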
### 6. Reminder
None
### 7. FAQs
None


@@ -14,7 +14,7 @@
    :caption: Tutorials
    tutorials/how_to_convert_model.md
-   tutorials/how_to_create_config.md
+   tutorials/how_to_write_config.md
    tutorials/how_to_evaluate_a_model.md
    tutorials/how_to_test_model.md
    tutorials/how_to_support_new_models.md


@@ -1 +0,0 @@
-## How to set up a config file


@@ -0,0 +1,203 @@
## How to write config
This document describes how to write the config file for a deployment task. A deployment task needs two config files: one is the config file of the codebase model, the other is the deployment config. The deployment config usually includes an `onnx config`, a `codebase config`, and a `backend config`.
<!-- TOC -->
- [How to write config](#how-to-write-config)
  - [1. How to write onnx config](#1-how-to-write-onnx-config)
    - [Description of onnx config arguments](#description-of-onnx-config-arguments)
      - [Example](#example)
    - [If you need to use dynamic input](#if-you-need-to-use-dynamic-input)
      - [Example](#example-1)
  - [2. How to write codebase config](#2-how-to-write-codebase-config)
    - [Description of codebase config arguments](#description-of-codebase-config-arguments)
      - [Example](#example-2)
    - [If you need to partition the model](#if-you-need-to-partition-the-model)
      - [Example](#example-3)
    - [List of tasks in all codebases](#list-of-tasks-in-all-codebases)
  - [3. How to write backend config](#3-how-to-write-backend-config)
    - [Example](#example-4)
  - [4. A complete example of deploying an mmcls model on TensorRT](#4-a-complete-example-of-deploying-an-mmcls-model-on-tensorrt)
  - [5. How to write model config](#5-how-to-write-model-config)
  - [6. Reminder](#6-reminder)
  - [7. FAQs](#7-faqs)
<!-- TOC -->
### 1. How to write onnx config
The onnx config describes the information needed to convert the model from PyTorch to ONNX.
#### Description of onnx config arguments
- `type`: Type of this config. Default is `onnx`.
- `export_params`: If specified, all parameters will be exported. Set this to False if you want to export an untrained model.
- `keep_initializers_as_inputs`: If True, all the initializers (typically corresponding to parameters) in the exported graph will also be added as inputs to the graph. If False, initializers are not added as inputs to the graph, and only the non-parameter inputs are added.
- `opset_version`: The opset version; 11 by default.
- `save_file`: The output onnx file.
- `input_names`: Names of the input nodes of the onnx graph.
- `output_names`: Names of the output nodes of the onnx graph.
- `input_shape`: The shape of the model input.
##### Example
```python
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=None)
```
#### If you need to use dynamic input
If you need dynamic shapes for the inputs and outputs, you need to add a dict describing the dynamic axes to the onnx config.
- `dynamic_axes`: Describes the dynamic input and output information.
##### Example
```python
dynamic_axes = {
    'input': {
        0: 'batch',
        2: 'height',
        3: 'width'
    },
    'dets': {
        0: 'batch',
        1: 'num_dets',
    },
    'labels': {
        0: 'batch',
        1: 'num_dets',
    },
}
```
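Note that `dynamic_axes` is passed as one more argument of the onnx config rather than on its own; a minimal sketch, mirroring the values above:
```python
# `dynamic_axes` sits alongside the other onnx config arguments.
onnx_config = dict(
    type='onnx',
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['dets', 'labels'],
    dynamic_axes={
        'input': {0: 'batch', 2: 'height', 3: 'width'},
        'dets': {0: 'batch', 1: 'num_dets'},
        'labels': {0: 'batch', 1: 'num_dets'},
    })
```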
### 2. How to write codebase config
The codebase config describes the model's codebase and task type.
#### Description of codebase config arguments
Set the model's codebase with the following arguments:
- `type`: Model's codebase, including `mmcls`, `mmdet`, `mmseg`, `mmocr`, `mmedit`.
- `task`: Model's task type, referring to the [List of tasks in all codebases](#list-of-tasks-in-all-codebases).
##### Example
```python
codebase_config = dict(type='mmcls', task='Classification')
```
#### If you need to partition the model
If you need to partition the model, you need to add a partition config. Note that currently only MMDetection models support partitioning.
- `type`: Model's task type, referring to the [List of tasks in all codebases](#list-of-tasks-in-all-codebases).
##### Example
```python
partition_config = dict(type='single_stage', apply_marks=True)
```
#### List of tasks in all codebases
| codebase | task | partition |
| :--------------: | :--------------: | :-------: |
| mmcls | classification | N |
| mmdet | single-stage | Y |
| mmdet | two-stage | Y |
| mmseg | segmentation | N |
| mmocr | text-detection | N |
| mmocr | text-recognition | N |
| mmedit | super-resolution | N |
### 3. How to write backend config
The backend config mainly specifies the backend on which the model runs and provides the information the model needs when running on that backend. See [ONNX Runtime](../backends/onnxruntime.md), [TensorRT](../backends/tensorrt.md), [NCNN](../backends/ncnn.md) and [PPL](../backends/ppl.md).
- `type`: The backend that runs the model, including `onnxruntime`, `ncnn`, `ppl`, `tensorrt`.
#### Example
```python
# `trt` must be imported so that `trt.Logger` below is defined.
import tensorrt as trt

backend_config = dict(
    type='tensorrt',
    common_config=dict(
        fp16_mode=False, log_level=trt.Logger.INFO,
        max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 512, 1024],
                    opt_shape=[1, 3, 1024, 2048],
                    max_shape=[1, 3, 2048, 2048])))
    ])
```
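Backends that need no extra runtime information can omit `common_config` and `model_inputs`; a minimal sketch, assuming ONNX Runtime requires no further options here:
```python
# Only the backend type is required when there is nothing else to configure.
backend_config = dict(type='onnxruntime')
```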
### 4. A complete example of deploying an mmcls model on TensorRT
Here we show a complete deployment config for deploying an mmcls model on TensorRT.
```python
import tensorrt as trt

codebase_config = dict(type='mmcls', task='Classification')
backend_config = dict(
    type='tensorrt',
    common_config=dict(
        fp16_mode=False,
        log_level=trt.Logger.INFO,
        max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 224, 224],
                    opt_shape=[4, 3, 224, 224],
                    max_shape=[64, 3, 224, 224])))])
onnx_config = dict(
    type='onnx',
    dynamic_axes={
        'input': {
            0: 'batch',
            2: 'height',
            3: 'width'
        },
        'output': {
            0: 'batch'
        }
    },
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=[224, 224])
partition_config = None
```
### 5. How to write model config
Write the model config file according to the model's codebase. The model config file is used to initialize the model; see [MMClassification](https://github.com/open-mmlab/mmclassification/blob/master/docs_zh-CN/tutorials/config.md), [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs_zh-CN/tutorials/config.md), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/blob/master/docs_zh-CN/tutorials/config.md), [MMOCR](https://github.com/open-mmlab/mmocr/tree/main/configs), [MMEditing](https://github.com/open-mmlab/mmediting/blob/master/docs_zh-CN/config.md).
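Since both config files are regular mmcv config files, they can be loaded with `mmcv.Config`; a minimal sketch (the paths below are hypothetical):
```python
from mmcv import Config

# Hypothetical paths; substitute your own deployment and model configs.
deploy_cfg = Config.fromfile('classification_tensorrt_dynamic.py')
model_cfg = Config.fromfile('resnet18_b32x8_imagenet.py')

# mmcv.Config supports attribute-style access to the dicts defined above.
print(deploy_cfg.backend_config.type)    # 'tensorrt'
print(deploy_cfg.onnx_config.save_file)  # 'end2end.onnx'
```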
### 6. Reminder
None
### 7. FAQs
None