
# How to write config

This tutorial describes how to write a config file for model conversion and deployment. A deployment config consists of three parts: the onnx config, the codebase config, and the backend config.

## 1. How to write onnx config

The onnx config describes how to export a model from PyTorch to ONNX.

### Description of onnx config arguments

- `type`: Type of the config dict. Default is `onnx`.
- `export_params`: If True, all parameters will be exported. Set this to False if you want to export an untrained model.
- `keep_initializers_as_inputs`: If True, all the initializers (typically corresponding to parameters) in the exported graph will also be added as inputs to the graph. If False, initializers are not added as inputs, and only the non-parameter inputs are.
- `opset_version`: The ONNX opset version; 11 by default.
- `save_file`: Output filename of the ONNX model.
- `input_names`: Names to assign to the input nodes of the graph.
- `output_names`: Names to assign to the output nodes of the graph.
- `input_shape`: The height and width of the model's input tensor.
### Example

```python
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=None)
```
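These fields map more or less directly onto the arguments of `torch.onnx.export`. The helper below is an illustrative sketch of that mapping; the function name `onnx_config_to_export_kwargs` is hypothetical, not mmdeploy's real exporter code:

```python
# Hypothetical sketch: translating an onnx_config dict into keyword
# arguments for torch.onnx.export. Illustrative only.

def onnx_config_to_export_kwargs(onnx_config):
    """Build torch.onnx.export keyword args from an onnx_config dict."""
    return dict(
        f=onnx_config['save_file'],
        export_params=onnx_config['export_params'],
        keep_initializers_as_inputs=onnx_config['keep_initializers_as_inputs'],
        opset_version=onnx_config['opset_version'],
        input_names=onnx_config['input_names'],
        output_names=onnx_config['output_names'],
        dynamic_axes=onnx_config.get('dynamic_axes'))  # None if absent

onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=None)

kwargs = onnx_config_to_export_kwargs(onnx_config)
print(kwargs['opset_version'])  # 11
```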

### If you need to use dynamic axes

If dynamic input and output shapes are required, add a `dynamic_axes` dict to the onnx config.

- `dynamic_axes`: Describes which dimensions of the inputs and outputs are dynamic.
### Example

```python
    dynamic_axes={
        'input': {
            0: 'batch',
            2: 'height',
            3: 'width'
        },
        'dets': {
            0: 'batch',
            1: 'num_dets',
        },
        'labels': {
            0: 'batch',
            1: 'num_dets',
        },
    }
```
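Every key in `dynamic_axes` must name a tensor listed in `input_names` or `output_names`, otherwise the axis specification is silently useless. A small sanity check along these lines (a hypothetical helper, not part of mmdeploy) can catch typos early:

```python
# Hypothetical sanity check: dynamic_axes keys must refer to tensors
# declared in input_names or output_names.

def check_dynamic_axes(dynamic_axes, input_names, output_names):
    """Raise ValueError if dynamic_axes names an unknown tensor."""
    known = set(input_names) | set(output_names)
    unknown = set(dynamic_axes) - known
    if unknown:
        raise ValueError(
            f'dynamic_axes refer to unknown tensors: {sorted(unknown)}')

# Matches the example above: 'input' is an input, 'dets'/'labels' are outputs.
check_dynamic_axes(
    dynamic_axes={'input': {0: 'batch'}, 'dets': {0: 'batch'}, 'labels': {0: 'batch'}},
    input_names=['input'],
    output_names=['dets', 'labels'])  # passes silently
```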

## 2. How to write codebase config

The codebase config contains information such as the codebase type and task type.

### Description of codebase config arguments

- `type`: The model's codebase, e.g. `mmcls`, `mmdet`, `mmseg`, `mmocr`, `mmedit`.
- `task`: The model's task type.

### Example

```python
codebase_config = dict(type='mmcls', task='Classification')
```

### If you need to use the partition model

If you want to partition the model, add a partition config dict. Note that currently only MMDetection models support partitioning.

### Example

```python
partition_config = dict(type='single_stage', apply_marks=True)
```

### List of tasks in all codebases

| codebase | task             | partition |
| :------- | :--------------- | :-------- |
| mmcls    | classification   | N         |
| mmdet    | single-stage     | Y         |
| mmdet    | two-stage        | Y         |
| mmseg    | segmentation     | N         |
| mmocr    | text-detection   | N         |
| mmocr    | text-recognition | N         |
| mmedit   | super-resolution | N         |
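The table above can be expressed as a small lookup, which is handy when writing tooling around deploy configs. This helper is purely illustrative; mmdeploy does not expose such a function:

```python
# Hypothetical lookup table mirroring the "List of tasks" table above.

PARTITION_SUPPORT = {
    ('mmcls', 'classification'): False,
    ('mmdet', 'single-stage'): True,
    ('mmdet', 'two-stage'): True,
    ('mmseg', 'segmentation'): False,
    ('mmocr', 'text-detection'): False,
    ('mmocr', 'text-recognition'): False,
    ('mmedit', 'super-resolution'): False,
}

def supports_partition(codebase, task):
    """Return True if the (codebase, task) pair supports partitioning."""
    return PARTITION_SUPPORT.get((codebase, task), False)

print(supports_partition('mmdet', 'two-stage'))     # True
print(supports_partition('mmseg', 'segmentation'))  # False
```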

## 3. How to write backend config

The backend config specifies the backend on which the model runs and provides the information the model needs there, e.g. for ONNX Runtime, TensorRT, NCNN, or PPL.

- `type`: The model's backend, one of `onnxruntime`, `ncnn`, `ppl`, `tensorrt`.

### Example

```python
import tensorrt as trt

backend_config = dict(
    type='tensorrt',
    common_config=dict(
        fp16_mode=False, log_level=trt.Logger.INFO, max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 512, 1024],
                    opt_shape=[1, 3, 1024, 2048],
                    max_shape=[1, 3, 2048, 2048])))
    ])
```
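For TensorRT optimization profiles, every dimension must satisfy `min_shape <= opt_shape <= max_shape`, or engine building will fail. The check below is a hypothetical sketch of that constraint, not mmdeploy code:

```python
# Hypothetical validation: each dimension of a TensorRT optimization
# profile must satisfy min <= opt <= max.

def check_trt_shapes(input_shapes):
    """Raise ValueError if any input's min/opt/max shapes are inconsistent."""
    for name, shapes in input_shapes.items():
        mins = shapes['min_shape']
        opts = shapes['opt_shape']
        maxs = shapes['max_shape']
        for lo, mid, hi in zip(mins, opts, maxs):
            if not lo <= mid <= hi:
                raise ValueError(
                    f'invalid profile for {name!r}: {mins} {opts} {maxs}')

# The shapes from the example above are consistent.
check_trt_shapes(dict(
    input=dict(
        min_shape=[1, 3, 512, 1024],
        opt_shape=[1, 3, 1024, 2048],
        max_shape=[1, 3, 2048, 2048])))  # passes
```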

## 4. A complete example of mmcls on TensorRT

Here is a complete deployment config for mmcls on TensorRT.

```python
import tensorrt as trt

codebase_config = dict(type='mmcls', task='Classification')

backend_config = dict(
    type='tensorrt',
    common_config=dict(
        fp16_mode=False,
        log_level=trt.Logger.INFO,
        max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 224, 224],
                    opt_shape=[4, 3, 224, 224],
                    max_shape=[64, 3, 224, 224])))])

onnx_config = dict(
    type='onnx',
    dynamic_axes={
        'input': {
            0: 'batch',
            2: 'height',
            3: 'width'
        },
        'output': {
            0: 'batch'
        }
    },
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=[224, 224])

partition_config = None
```
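A deploy config like this is a plain Python module: loaders such as mmcv's `Config.fromfile` evaluate the file and collect its top-level variables. The snippet below is a dependency-free sketch of that idea; it uses `exec` directly for illustration, whereas real code should go through `mmcv.Config.fromfile`:

```python
# Sketch of how a deploy config file is consumed. Illustrative only:
# mmcv.Config.fromfile does this more robustly.

import pathlib
import tempfile

CONFIG_TEXT = '''
codebase_config = dict(type='mmcls', task='Classification')
onnx_config = dict(type='onnx', opset_version=11, save_file='end2end.onnx')
partition_config = None
'''

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / 'deploy_cfg.py'
    path.write_text(CONFIG_TEXT)
    namespace = {}
    # Evaluate the config file and pick up its top-level variables.
    exec(path.read_text(), namespace)

print(namespace['codebase_config']['task'])  # Classification
```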

## 5. How to write model config

Write the model config according to the model's codebase. The model config is used to initialize the model; see MMClassification, MMDetection, MMSegmentation, MMOCR and MMEditing for details.

## 6. Reminder

None

## 7. FAQs

None