[Feature]: add a tool to generate supported-backends markdown table (#1374)

* convert2markdown

* update yaml2markdown code

* code update

* add parse_args

* add website list

* add url in yaml

* add table in convert

* From yaml export markdown

* Rename convert.py to generate_md_table.py

generate_markdown_table

* docs(project): sync en and zh docs

* Update mmaction.yml

* add backends parser

* Add type for the codeblock.

* move to useful tools
kaizhong 2023-01-18 16:32:26 +08:00 committed by GitHub
parent 968b4b0b60
commit bce276ef24
13 changed files with 161 additions and 0 deletions

@ -202,3 +202,38 @@ And the output look like this:
| Max | 1.689 | 591.983 |
+--------+------------+---------+
```
## generate_md_table
This tool can be used to generate a supported-backends markdown table.
### Usage
```shell
python tools/generate_md_table.py \
${yml_file} \
${output} \
${backends}
```
### Description of all arguments
- `yml_file:` input yml config path
- `output:` output markdown file path
- `backends:` output backends list. If not specified, it defaults to `onnxruntime tensorrt torchscript pplnn openvino ncnn`.
### Example:
Generate backends markdown table from mmocr.yml
```shell
python tools/generate_md_table.py tests/regression/mmocr.yml tests/regression/mmocr.md onnxruntime tensorrt torchscript pplnn openvino ncnn
```
And the output looks like this:
| model | task | onnxruntime | tensorrt | torchscript | pplnn | openvino | ncnn |
| :--------------------------------------------------------------------------- | :-------------- | :---------- | :------- | :---------- | :---- | :------- | :--- |
| [DBNet](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/dbnet) | TextDetection | Y | Y | Y | Y | Y | Y |
| [CRNN](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/crnn) | TextRecognition | Y | Y | Y | Y | N | Y |
| [SAR](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/sar) | TextRecognition | Y | N | N | N | N | N |
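The tool expects the regression-test yml layout: a `globals.repo_url` plus a `models` list where each entry has a `name`, `model_configs`, and `pipelines` with `deploy_config` paths. A minimal sketch of just those fields (the config paths below are illustrative, not taken from the actual regression files):

```yaml
globals:
  repo_url: https://github.com/open-mmlab/mmocr/tree/main

models:
  - name: DBNet
    model_configs:
      # Used to build the model link in the table (the directory part is kept).
      - configs/textdet/dbnet/some_dbnet_config.py
    pipelines:
      # One entry per backend deployment config; each maps to a Y in the table.
      - deploy_config: configs/mmocr/text-detection/text-detection_onnxruntime_dynamic.py
```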

@ -202,3 +202,38 @@ python tools/profiler.py \
| Max | 1.689 | 591.983 |
+--------+------------+---------+
```
## generate_md_table
Generate the table of backends supported by mmdeploy.
### Usage
```shell
python tools/generate_md_table.py \
${yml_file} \
${output} \
${backends}
```
### Description of all arguments
- `yml_file:` input yml config path
- `output:` output markdown file path
- `backends:` backends to output; defaults to `onnxruntime tensorrt torchscript pplnn openvino ncnn`
### Example
Generate the supported-backends table from mmocr.yml
```shell
python tools/generate_md_table.py tests/regression/mmocr.yml tests/regression/mmocr.md onnxruntime tensorrt torchscript pplnn openvino ncnn
```
The output:
| model | task | onnxruntime | tensorrt | torchscript | pplnn | openvino | ncnn |
| :--------------------------------------------------------------------------- | :-------------- | :---------- | :------- | :---------- | :---- | :------- | :--- |
| [DBNet](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/dbnet) | TextDetection | Y | Y | Y | Y | Y | Y |
| [CRNN](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/crnn) | TextRecognition | Y | Y | Y | Y | N | Y |
| [SAR](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/sar) | TextRecognition | Y | N | N | N | N | N |

@ -1,5 +1,6 @@
asynctest
coverage
easydict
flake8
interrogate
isort==4.3.21

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmaction2/tree/master
codebase_dir: ../mmaction2
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmclassification/tree/master
codebase_dir: ../mmclassification
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmdetection/tree/master
codebase_dir: ../mmdetection
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmdetection3d/tree/master
codebase_dir: ../mmdetection3d
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmediting/tree/master
codebase_dir: ../mmediting
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmocr/tree/main
codebase_dir: ../mmocr
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmpose/tree/master
codebase_dir: ../mmpose
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmrotate/tree/main
codebase_dir: ../mmrotate
checkpoint_force_download: False
images:

@ -1,4 +1,5 @@
globals:
repo_url: https://github.com/open-mmlab/mmsegmentation/tree/master
codebase_dir: ../mmsegmentation
checkpoint_force_download: False
images:

@ -0,0 +1,81 @@
# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os
import os.path as osp

import yaml
from easydict import EasyDict as edict

from mmdeploy.utils import get_backend, get_task_type, load_config


def parse_args():
    parser = argparse.ArgumentParser(
        description='from yaml export markdown table')
    parser.add_argument('yml_file', help='input yml config path')
    parser.add_argument('output', help='output markdown file path')
    parser.add_argument(
        'backends',
        nargs='*',
        help='backends you want to generate',
        default=[
            'onnxruntime', 'tensorrt', 'torchscript', 'pplnn', 'openvino',
            'ncnn'
        ])
    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    assert osp.exists(args.yml_file), f'File not found: {args.yml_file}'
    output_dir, _ = osp.split(args.output)
    if output_dir:
        os.makedirs(output_dir, exist_ok=True)

    header = ['model', 'task'] + args.backends
    aligner = [':--'] * (2 + len(args.backends))

    def write_row_f(writer, row):
        writer.write('|' + '|'.join(row) + '|\n')

    print(f'Processing {args.yml_file}')
    with open(args.yml_file, 'r') as reader, open(args.output, 'w') as writer:
        config = yaml.load(reader, Loader=yaml.FullLoader)
        config = edict(config)
        write_row_f(writer, header)
        write_row_f(writer, aligner)
        repo_url = config.globals.repo_url
        for model in config.models:
            name = model.name
            # Link to the directory holding the model's configs.
            config_url = osp.join(repo_url, model.model_configs[0])
            config_url, _ = osp.split(config_url)
            support_backends = {b: 'N' for b in args.backends}
            deploy_configs = [p.deploy_config for p in model.pipelines]
            cfgs = [load_config(c) for c in deploy_configs]
            # All pipelines of one model share the same task type.
            task = get_task_type(cfgs[0][0]).value
            for cfg in cfgs:
                support_backends[get_backend(cfg[0]).value] = 'Y'
            support_backends = [support_backends[b] for b in args.backends]
            model_name = f'[{name}]({config_url})'
            row = [model_name, task] + support_backends
            write_row_f(writer, row)
    print(f'Save to {args.output}')


if __name__ == '__main__':
    main()
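The table-writing core of the tool can be exercised without mmdeploy or a yaml file: given each model's name, task, and set of supported backends, it just joins cells with `|`. A minimal self-contained sketch of that logic (the `make_table` helper and the trimmed backend list are illustrative, not part of the tool):

```python
import io

# Illustrative subset of the default backend list.
BACKENDS = ['onnxruntime', 'tensorrt', 'ncnn']


def write_row(writer, row):
    # One markdown table row: |cell|cell|...|
    writer.write('|' + '|'.join(row) + '|\n')


def make_table(models):
    # models: list of (name, task, set-of-supported-backends) tuples.
    buf = io.StringIO()
    write_row(buf, ['model', 'task'] + BACKENDS)
    write_row(buf, [':--'] * (2 + len(BACKENDS)))
    for name, task, supported in models:
        flags = ['Y' if b in supported else 'N' for b in BACKENDS]
        write_row(buf, [name, task] + flags)
    return buf.getvalue()


table = make_table([('CRNN', 'TextRecognition', {'onnxruntime', 'tensorrt'})])
print(table)
```

A markdown renderer pads the cells, so the unpadded `|a|b|` form the tool emits is valid even though the committed examples show aligned columns.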