[Enhancement] Improve MMDeploy Regression test (#425)

* Make regression test a module under the project

* Use `--codebase` instead of `--deploy-yml`

* Improve doc for `--codebase`

* Add shorter arg `-p` for `--performance`

* Make `checkpoint-dir` an arg for the script

* Generate error log when conversion fails

* Improve return code handling for testing

* Add SDK test details to the doc

* Add environment setup to the doc

* Fix lint

* Fix doc lint

* Improve model path in report

* Improve report title

* Improve report checkpoint path

* Fix lint

* move test yaml under `tests/regression`

* Improve the test yaml path

* Fix lint

* Improve doc

* Improve the code of func `update_report`

* move doc to new location

* Fix arg

* Update arg details

* Use CPU for openvino, and for onnxruntime when only its CPU package is installed

* Fix word

* Fix function call for openpyxl 3.0.9

* Add some info

* Fix lint

* Fix filename

* Fix doc link

* Fix dir name with spaces when it is not SDK

* Add arg `--models` to test specific model(s)

* Do not save report when `--models` matches no model in the codebase

* Fix doc

* Fix lint

* Add table for metric in doc

* Improve table for doc

* Use `None` instead of `['all']`

* Improve doc

* set device type properly

* Increase popen bufsize

* Add `precision_type` in `work-dir`

* Fix popen stuck

* Fix lint

* Fix lint

* Fix popen stuck by using file handler

* Make metric dataset a list

* Update mmseg.yml

* Remove 'FPS' in the report

* Update do_regression_test.md

* Improve log

* Fix codespell

* Fix doc

* ncnn only saves `xxx.param` as the checkpoint name in the report

Co-authored-by: maningsheng <mnsheng@yeah.net>
HinGwenWoong 2022-05-27 17:08:32 +08:00 committed by GitHub
parent 9e6a3c8ec5
commit aabab46d8a
9 changed files with 275 additions and 176 deletions

do_regression_test.md View File

@@ -6,41 +6,62 @@
<!-- TOC -->
- [How to do regression test](#how-to-do-regression-test)
- [1. Usage](#1-usage)
- [1. Environment setup](#1-environment-setup)
- [Install and configure MMDeploy](#install-and-configure-mmdeploy)
- [Python dependencies](#python-dependencies)
- [2. Usage](#2-usage)
- [Argument description](#argument-description)
- [Example](#example)
- [2. Regression test configuration file](#2-regression-test-configuration-file)
- [3. Regression test configuration file](#3-regression-test-configuration-file)
- [Example and parameter description](#example-and-parameter-description)
- [3. Generated report](#3-generated-report)
- [4. Generated report](#4-generated-report)
- [Template](#template)
- [Example](#example)
- [4. Supported backends](#4-supported-backends)
- [5. Supported codebases and their metrics](#5-supported-codebases-and-their-metrics)
- [6. Notes](#6-notes)
- [7. FAQ](#7-faq)
- [5. Supported backends](#5-supported-backends)
- [6. Supported codebases and their metrics](#6-supported-codebases-and-their-metrics)
- [7. Notes](#7-notes)
- [8. FAQ](#8-faq)
<!-- TOC -->
## 1. Usage
## 1. Environment setup
### Install and configure MMDeploy
Before going through this chapter, install and configure MMDeploy according to the [build document](../01-how-to-build/build_from_source.md).
### Python dependencies
Install the test requirements:
```shell
pip install -r requirements/tests.txt
```
If numpy raises errors during use, update it:
```shell
pip install -U numpy
```
## 2. Usage
```shell
python ./tools/regression_test.py \
--deploy-yml "${DEPLOY_YML_PATH}" \
--codebase "${CODEBASE_NAME}" \
--backends "${BACKEND}" \
[--models "${MODELS}"] \
--work-dir "${WORK_DIR}" \
--device "${DEVICE}" \
--log-level INFO \
[--performance]
[--performance or -p]
```
### Argument description
- `--deploy-yml` : the codebase test yaml to use, e.g. `configs/mmdet/mmdet_regression_test.yaml`; set it to `all` to test everything.
- `--backends` : backends to test. Defaults to `all`, i.e. every backend; several backends can also be passed, e.g. `onnxruntime tensorrt`.
- `--codebase` : the codebase(s) to test, e.g. `mmdet`; pass several to test more than one, e.g. `mmcls mmdet ...`.
- `--backends` : backends to test. By default all backends are tested; several can also be passed, e.g. `onnxruntime tensorrt`. To also run the SDK test, configure `sdk_config` in `tests/regression/${codebase}.yml`.
- `--models` : models to test. By default all models in the yml are tested; several model names can also be passed, e.g. `ResNet SE-ResNet "Mask R-CNN"`. Note that a name containing spaces, like `"Mask R-CNN"` in the example, must be wrapped in double quotes.
- `--work-dir` : directory for model conversion and report generation.
- `--device` : device to use. Defaults to `cuda`.
- `--log-level` : log level, one of `'CRITICAL' 'FATAL' 'ERROR' 'WARN' 'WARNING' 'INFO' 'DEBUG' 'NOTSET'`. Defaults to `INFO`.
- `--performance` : whether to test precision. With the flag, conversion + precision are tested; without it, only conversion is tested.
- `-p` or `--performance` : whether to test precision. With the flag, conversion + precision are tested; without it, only conversion is tested.
### Notes
For Windows users:
@@ -49,56 +70,64 @@ python ./tools/regression_test.py \
## Examples
1. Test conversion + precision for all backends of mmdet and mmpose
1. Test **conversion + precision** for all backends of mmdet and mmpose
```shell
python ./tools/regression_test.py \
--deploy-yml ./configs/mmdet/mmdet_regression_test.yaml ./configs/mmpose/mmpose_regression_test.yaml \
--backends all \
--codebase mmdet mmpose \
--work-dir "../mmdeploy_regression_working_dir" \
--device "cuda" \
--log-level INFO \
--performance
```
2. Test conversion + precision for some backends of mmdet and mmpose
2. Test **conversion + precision** for some backends of mmdet and mmpose
```shell
python ./tools/regression_test.py \
--deploy-yml ./configs/mmdet/mmdet_regression_test.yaml ./configs/mmdet/mmpose.yaml \
--codebase mmdet mmpose \
--backends onnxruntime tesnsorrt \
--work-dir "../mmdeploy_regression_working_dir" \
--device "cuda" \
--log-level INFO \
--performance
-p
```
3. Test some backends of mmdet and mmpose, conversion only
3. Test some backends of mmdet and mmpose, **conversion only**
```shell
python ./tools/regression_test.py \
--deploy-yml ./configs/mmdet/mmdet_regression_test.yaml ./configs/mmdet/mmpose.yaml \
--codebase mmdet mmpose \
--backends onnxruntime tesnsorrt \
--work-dir "../mmdeploy_regression_working_dir" \
--device "cuda" \
--log-level INFO
```
## 2. Regression test configuration file
4. Test some models of mmdet and mmcls, **conversion only**
```shell
python ./tools/regression_test.py \
--codebase mmdet mmcls \
--models ResNet SE-ResNet "Mask R-CNN" \
--work-dir "../mmdeploy_regression_working_dir" \
--device "cuda" \
--log-level INFO
```
## 3. Regression test configuration file
### Example and parameter description
```yaml
globals:
codebase_name: mmocr # codebase name for the regression test
codebase_dir: ../mmocr # codebase path for the regression test
checkpoint_force_download: False # whether to re-download checkpoints even if they already exist
checkpoint_dir: ../mmdeploy_checkpoints # path to download checkpoints to
images: # images used by the tests
img_224x224: &img_224x224 ./tests/data/tiger.jpeg
img_300x300: &img_300x300
img_800x1344: &img_cityscapes_800x1344
img_blank: &img_blank
img_densetext_det: &img_densetext_det ../mmocr/demo/demo_densetext_det.jpg
img_demo_text_det: &img_demo_text_det ../mmocr/demo/demo_text_det.jpg
img_demo_text_ocr: &img_demo_text_ocr ../mmocr/demo/demo_text_ocr.jpg
img_demo_text_recog: &img_demo_text_recog ../mmocr/demo/demo_text_recog.jpg
metric_info: &metric_info # metric parameters
hmean-iou: # named after metafile.Results.Metrics
eval_name: hmean-iou # named after the test.py --metrics argument
@@ -112,9 +141,12 @@ globals:
tolerance: 0.2
task_name: Text Recognition
dataset: IIIT5K
convert_image: &convert_image # images used during conversion
input_img: *img_224x224
test_img: *img_300x300
convert_image_det: &convert_image_det # images used during det conversion
input_img: *img_densetext_det
test_img: *img_demo_text_det
convert_image_rec: &convert_image_rec
input_img: *img_demo_text_recog
test_img: *img_demo_text_recog
backend_test: &default_backend_test True # whether to run the precision test on the backend
sdk: # SDK config files
sdk_detection_dynamic: &sdk_detection_dynamic configs/mmocr/text-detection/text-detection_sdk_dynamic.py
@@ -122,30 +154,30 @@ globals:
onnxruntime:
pipeline_ort_recognition_static_fp32: &pipeline_ort_recognition_static_fp32
convert_image: *convert_image # image used during conversion
convert_image: *convert_image_rec # image used during conversion
backend_test: *default_backend_test # whether to run the backend test; treated as False if absent
sdk_config: *sdk_recognition_dynamic # whether to run the SDK test; if present, the given SDK config is used, otherwise no SDK test is run
deploy_config: configs/mmocr/text-recognition/text-recognition_onnxruntime_static.py # deploy cfg path to use, relative to the mmdeploy root
pipeline_ort_recognition_dynamic_fp32: &pipeline_ort_recognition_dynamic_fp32
convert_image: *convert_image
convert_image: *convert_image_rec
backend_test: *default_backend_test
sdk_config: *sdk_recognition_dynamic
deploy_config: configs/mmocr/text-recognition/text-recognition_onnxruntime_dynamic.py
pipeline_ort_detection_dynamic_fp32: &pipeline_ort_detection_dynamic_fp32
convert_image: *convert_image
convert_image: *convert_image_det
deploy_config: configs/mmocr/text-detection/text-detection_onnxruntime_dynamic.py
tensorrt:
pipeline_trt_recognition_dynamic_fp16: &pipeline_trt_recognition_dynamic_fp16
convert_image: *convert_image
convert_image: *convert_image_rec
backend_test: *default_backend_test
sdk_config: *sdk_recognition_dynamic
deploy_config: configs/mmocr/text-recognition/text-recognition_tensorrt-fp16_dynamic-1x32x32-1x32x640.py
pipeline_trt_detection_dynamic_fp16: &pipeline_trt_detection_dynamic_fp16
convert_image: *convert_image
convert_image: *convert_image_det
backend_test: *default_backend_test
sdk_config: *sdk_detection_dynamic
deploy_config: configs/mmocr/text-detection/text-detection_tensorrt-fp16_dynamic-320x320-1024x1824.py
@@ -177,31 +209,38 @@ models:
pipelines:
- *pipeline_ort_detection_dynamic_fp32
- *pipeline_trt_detection_dynamic_fp16
# a special pipeline can be added like this
- convert_image: xxx
backend_test: xxx
sdk_config: xxx
deploy_config: configs/mmocr/text-detection/xxx
```
## 3. Generated report
## 4. Generated report
### Template
|| model_name | model_config | task_name | model_checkpoint_name | dataset | backend_name | deploy_config | static_or_dynamic | precision_type | conversion_result | fps | metric_1 | metric_2 | metric_n | test_pass |
|------------|--------------|-----------------|-----------------------|----------|--------------|---------------|-------------------|----------------|-------------------|---|----------|----------|-----------|-----------|-----|
| index | model name | model config path | task name | `.pth` model path | dataset name | backend name | deploy cfg path | dynamic or static | precision type | conversion result | FPS value | metric 1 value | metric 2 value | metric n value | backend test result |
|| Model | Model Config | Task | Checkpoint | Dataset | Backend | Deploy Config | Static or Dynamic | Precision Type | Conversion Result | metric_1 | metric_2 | metric_n | Test Pass |
|------------|--------------|-----------------|-----------------------|----------|--------------|---------------|-------------------|----------------|-------------------|---|----------|----------|-----------|-----------|
| index | model name | model config path | task name | `.pth` model path | dataset name | backend name | deploy cfg path | dynamic or static | precision type | conversion result | metric 1 value | metric 2 value | metric n value | backend test result |
### Example
This is a report generated for MMOCR. Paths under the working and checkpoint directories are shortened to `${WORK_DIR}` and `${CHECKPOINT_DIR}` in the report.
|| model_name | model_config | task_name | model_checkpoint_name | dataset | backend_name | deploy_config | static_or_dynamic | precision_type | conversion_result | fps | hmean-iou | word_acc | test_pass |
| ---- | ---------- | ------------------------------------------------------------ | ---------------- | ------------------------------------------------------------ | --------- | --------------- | ------------------------------------------------------------ | ----------------- | -------------- | ----------------- |-----------|----------|-----------| --------- |
| 0 | crnn | ../mmocr/configs/textrecog/crnn/crnn_academic_dataset.py | Text Recognition | ../mmdeploy_checkpoints/mmocr/crnn/crnn_academic-a723a1c5.pth | IIIT5K | Pytorch| -| - | - | - | - | - | 80.5 | -|
| 1 | crnn | ../mmocr/configs/textrecog/crnn/crnn_academic_dataset.py | Text Recognition | ${WORK_DIR}/mmocr/crnn/onnxruntime/static/crnn_academic-a723a1c5/end2end.onnx | x| onnxruntime | configs/mmocr/text-recognition/text-recognition_onnxruntime_dynamic.py | static | fp32 | True | 182.21 | - | 80.67 | True|
| 2 | crnn | ../mmocr/configs/textrecog/crnn/crnn_academic_dataset.py | Text Recognition | ${WORK_DIR}/mmocr/crnn/onnxruntime/static/crnn_academic-a723a1c5 | x| SDK-onnxruntime | configs/mmocr/text-recognition/text-recognition_sdk_dynamic.py | static | fp32 | True | x | - | x | False |
| 3 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ../mmdeploy_checkpoints/mmocr/dbnet/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.pth | ICDAR2015 | Pytorch| -| - | - | - | - | 0.795 | - | -|
| 4 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ../mmdeploy_checkpoints/mmocr/dbnet/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.pth | ICDAR | onnxruntime | configs/mmocr/text-detection/text-detection_onnxruntime_dynamic.py | dynamic | fp32 | True | - | - | - | True|
| 5 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ${WORK_DIR}/mmocr/dbnet/tensorrt/dynamic/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597/end2end.engine | ICDAR | tensorrt | configs/mmocr/text-detection/text-detection_tensorrt-fp16_dynamic-320x320-1024x1824.py | dynamic | fp16 | True | 229.06 | 0.793302 | - | True|
| 6 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ${WORK_DIR}/mmocr/dbnet/tensorrt/dynamic/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597 | ICDAR | SDK-tensorrt | configs/mmocr/text-detection/text-detection_sdk_dynamic.py | dynamic | fp16 | True | 140.06 | 0.795073 | - | True|
| | Model | Model Config | Task | Checkpoint | Dataset | Backend | Deploy Config | Static or Dynamic | Precision Type | Conversion Result | hmean-iou | word_acc | Test Pass |
|-----| ---------- | ------------------------------------------------------------ | ---------------- | ------------------------------------------------------------ | --------- | --------------- | ------------------------------------------------------------ | ----------------- | -------------- | ----------------- |------------| ---------- | --------- |
| 0 | crnn | ../mmocr/configs/textrecog/crnn/crnn_academic_dataset.py | Text Recognition | ../mmdeploy_checkpoints/mmocr/crnn/crnn_academic-a723a1c5.pth | IIIT5K | Pytorch| -| - | - | - | - | 80.5 | -|
| 1 | crnn | ../mmocr/configs/textrecog/crnn/crnn_academic_dataset.py | Text Recognition | ${WORK_DIR}/mmocr/crnn/onnxruntime/static/crnn_academic-a723a1c5/end2end.onnx | x| onnxruntime | configs/mmocr/text-recognition/text-recognition_onnxruntime_dynamic.py | static | fp32 | True | - | 80.67 | True|
| 2 | crnn | ../mmocr/configs/textrecog/crnn/crnn_academic_dataset.py | Text Recognition | ${WORK_DIR}/mmocr/crnn/onnxruntime/static/crnn_academic-a723a1c5 | x| SDK-onnxruntime | configs/mmocr/text-recognition/text-recognition_sdk_dynamic.py | static | fp32 | True | - | x | False |
| 3 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ../mmdeploy_checkpoints/mmocr/dbnet/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.pth | ICDAR2015 | Pytorch| -| - | - | - | 0.795 | - | -|
| 4 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ../mmdeploy_checkpoints/mmocr/dbnet/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.pth | ICDAR | onnxruntime | configs/mmocr/text-detection/text-detection_onnxruntime_dynamic.py | dynamic | fp32 | True | - | - | True|
| 5 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ${WORK_DIR}/mmocr/dbnet/tensorrt/dynamic/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597/end2end.engine | ICDAR | tensorrt | configs/mmocr/text-detection/text-detection_tensorrt-fp16_dynamic-320x320-1024x1824.py | dynamic | fp16 | True | 0.793302 | - | True|
| 6 | dbnet| ../mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py | Text Detection | ${WORK_DIR}/mmocr/dbnet/tensorrt/dynamic/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597 | ICDAR | SDK-tensorrt | configs/mmocr/text-detection/text-detection_sdk_dynamic.py | dynamic | fp16 | True | 0.795073 | - | True|
## 4. Supported backends
## 5. Supported backends
- [x] ONNX Runtime
- [x] TensorRT
- [x] PPLNN
@@ -210,19 +249,27 @@
- [x] TorchScript
- [x] MMDeploy SDK
## 5. Supported codebases and their metrics
- [x] mmdet
- [x] bbox
- [x] mmcls
- [x] accuracy
- [x] mmseg
- [x] mIoU
- [x] mmpose
- [x] AR
- [x] AP
- [x] mmocr
- [x] hmean
- [x] acc
- [x] mmedit
- [x] PSNR
- [x] SSIM
## 6. Supported codebases and their metrics
| Codebase | Metric | Support |
|----------| ---------- |-------------------|
| mmdet | bbox | :heavy_check_mark: |
| | segm | :heavy_check_mark: |
| | PQ | :x: |
| mmcls | accuracy | :heavy_check_mark: |
| mmseg | mIoU | :heavy_check_mark: |
| mmpose | AR | :heavy_check_mark: |
| | AP | :heavy_check_mark: |
| mmocr | hmean | :heavy_check_mark: |
| | acc | :heavy_check_mark: |
| mmedit | PSNR | :heavy_check_mark: |
| | SSIM | :heavy_check_mark: |
## 7. Notes
None for now.
## 8. FAQ
None for now.

requirements/tests.txt View File

@@ -3,7 +3,7 @@ coverage
flake8
interrogate
isort==4.3.21
openpyxl
openpyxl==3.0.9
pandas
pytest
xlrd==1.2.0

tests/regression/mmcls.yml View File

@@ -1,8 +1,6 @@
globals:
codebase_name: mmcls
codebase_dir: ../mmclassification
checkpoint_force_download: False
checkpoint_dir: ../mmdeploy_checkpoints
images:
img_snake: &img_snake ../mmclassification/demo/demo.JPEG
img_bird: &img_bird ../mmclassification/demo/bird.JPEG

tests/regression/mmdet.yml View File

@@ -1,8 +1,6 @@
globals:
codebase_name: mmdet
codebase_dir: ../mmdetection
checkpoint_force_download: False
checkpoint_dir: ../mmdeploy_checkpoints
images:
input_img: &input_img ../mmdetection/demo/demo.jpg
test_img: &test_img ./tests/data/tiger.jpeg

tests/regression/mmedit.yml View File

@@ -1,8 +1,6 @@
globals:
codebase_name: mmedit
codebase_dir: ../mmediting
checkpoint_force_download: False
checkpoint_dir: ../mmdeploy_checkpoints
images:
img_face: &img_face ../mmediting/tests/data/face/000001.png
img_bg: &img_bg ../mmediting/tests/data/bg/GT26r.jpg

tests/regression/mmocr.yml View File

@@ -1,8 +1,6 @@
globals:
codebase_name: mmocr
codebase_dir: ../mmocr
checkpoint_force_download: False
checkpoint_dir: ../mmdeploy_checkpoints
images:
img_densetext_det: &img_densetext_det ../mmocr/demo/demo_densetext_det.jpg
img_demo_text_det: &img_demo_text_det ../mmocr/demo/demo_text_det.jpg

tests/regression/mmpose.yml View File

@@ -1,8 +1,6 @@
globals:
codebase_name: mmpose
codebase_dir: ../mmpose
checkpoint_force_download: False
checkpoint_dir: ../mmdeploy_checkpoints
images:
img_human_pose: &img_human_pose ../mmpose/tests/data/coco/000000000785.jpg
img_human_pose_256x192: &img_human_pose_256x192 ./demo/resources/human-pose.jpg

tests/regression/mmseg.yml View File

@@ -1,8 +1,6 @@
globals:
codebase_name: mmseg
codebase_dir: ../mmsegmentation
checkpoint_force_download: False
checkpoint_dir: ../mmdeploy_checkpoints
images:
img_leftImg8bit: &img_leftImg8bit ../mmsegmentation/tests/data/pseudo_cityscapes_dataset/leftImg8bit/frankfurt_000000_000294_leftImg8bit.png
img_loveda_0: &img_loveda_0 ../mmsegmentation/tests/data/pseudo_loveda_dataset/img_dir/0.png
@@ -16,7 +14,7 @@ globals:
metric_key: mIoU # eval OrderedDict key name
tolerance: 5 # metric ±n%
task_name: Semantic Segmentation # metafile.Results.Task
dataset: Cityscapes # metafile.Results.Dataset
dataset: [Cityscapes, ADE20K] # metafile.Results.Dataset
convert_image: &convert_image
input_img: *img_leftImg8bit
test_img: *img_loveda_0

tools/regression_test.py View File

@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import logging
import os
import subprocess
from collections import OrderedDict
from pathlib import Path
@@ -20,32 +20,29 @@ from mmdeploy.utils import (get_backend, get_codebase, get_root_logger,
def parse_args():
parser = argparse.ArgumentParser(description='Regression Test')
parser.add_argument(
'--deploy-yml',
'--codebase',
nargs='+',
help='regression test yaml path.',
default=[
'./configs/mmcls/mmcls_regression_test.yaml',
'./configs/mmdet/mmdet_regression_test.yaml',
'./configs/mmseg/mmseg_regression_test.yaml',
'./configs/mmpose/mmpose_regression_test.yaml',
'./configs/mmocr/mmocr_regression_test.yaml',
'./configs/mmedit/mmedit_regression_test.yaml'
])
default=['mmcls', 'mmdet', 'mmseg', 'mmpose', 'mmocr', 'mmedit'])
parser.add_argument(
'-p',
'--performance',
default=False,
action='store_true',
help='test performance if it set')
parser.add_argument(
'--backends',
nargs='+',
help='test specific backend(s)',
default=['all'])
'--backends', nargs='+', help='test specific backend(s)')
parser.add_argument('--models', nargs='+', help='test specific model(s)')
parser.add_argument(
'--work-dir',
type=str,
help='the dir to save logs and models',
default='../mmdeploy_regression_working_dir')
parser.add_argument(
'--checkpoint-dir',
type=str,
help='the dir to save checkpoint for all model',
default='../mmdeploy_checkpoints')
parser.add_argument(
'--device', type=str, help='Device type, cuda or cpu', default='cuda')
parser.add_argument(
@@ -101,7 +98,7 @@ def merge_report(work_dir: str, logger: logging.Logger):
# delete if sheet already exist
if sheet_name in wb.sheetnames:
wb.remove_sheet(wb[sheet_name])
wb.remove(wb[sheet_name])
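# openpyxl 3.x: `Workbook.remove(ws)` supersedes the deprecated
# `remove_sheet`, hence the openpyxl==3.0.9 pin in requirements/tests.txt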
# create sheet
wb.create_sheet(title=sheet_name, index=0)
# write in row
@@ -112,10 +109,12 @@ def merge_report(work_dir: str, logger: logging.Logger):
for name in wb.sheetnames:
ws = wb[name]
if ws.cell(1, 1).value is None:
wb.remove_sheet(ws)
wb.remove(ws)
# save to file
wb.save(str(res_file))
logger.info('Report merge successful.')
def get_model_metafile_info(global_info: dict, model_info: dict,
logger: logging.Logger):
@@ -147,6 +146,7 @@ def get_model_metafile_info(global_info: dict, model_info: dict,
checkpoint_save_dir = Path(checkpoint_dir).joinpath(
codebase_name, model_info.get('name'))
checkpoint_save_dir.mkdir(parents=True, exist_ok=True)
logger.info(f'Saving checkpoint in {checkpoint_save_dir}')
# get model metafile info
metafile_path = Path(codebase_dir).joinpath(model_info.get('metafile'))
@@ -188,11 +188,11 @@ def get_model_metafile_info(global_info: dict, model_info: dict,
def update_report(report_dict: dict, model_name: str, model_config: str,
task_name: str, model_checkpoint_name: str, dataset: str,
task_name: str, checkpoint: str, dataset: str,
backend_name: str, deploy_config: str,
static_or_dynamic: str, precision_type: str,
conversion_result: str, fps: str, metric_info: list,
test_pass: str, report_txt_path: Path):
test_pass: str, report_txt_path: Path, codebase_name: str):
"""Update report information.
Args:
@@ -200,7 +200,7 @@ def update_report(report_dict: dict, model_name: str, model_config: str,
model_name (str): Model name.
model_config (str): Model config name.
task_name (str): Task name.
model_checkpoint_name (str): Model checkpoint name.
checkpoint (str): Model checkpoint name.
dataset (str): Dataset name.
backend_name (str): Backend name.
deploy_config (str): Deploy config name.
@@ -211,44 +211,48 @@ def update_report(report_dict: dict, model_name: str, model_config: str,
metric_info (list): Metric info list of the ${modelName}.yml.
test_pass (str): Test result: Pass or Fail.
report_txt_path (Path): Report txt path.
codebase_name (str): Codebase name.
"""
if '.pth' not in model_checkpoint_name:
# make model path shorter by cutting the work_dir_root
work_dir_root = report_txt_path.parent.absolute().resolve()
if ' ' not in model_checkpoint_name:
model_checkpoint_name = \
Path(model_checkpoint_name).absolute().resolve()
model_checkpoint_name = \
str(model_checkpoint_name).replace(str(work_dir_root),
'${WORK_DIR}')
# make model path shorter
if '.pth' in checkpoint:
checkpoint = Path(checkpoint).absolute().resolve()
checkpoint = str(checkpoint).split(f'/{codebase_name}/')[-1]
checkpoint = '${CHECKPOINT_DIR}' + f'/{codebase_name}/{checkpoint}'
else:
if Path(checkpoint).exists():
# To handle a path containing spaces (e.g. 'A.a B.b') when testing the SDK.
checkpoint = Path(checkpoint).absolute().resolve()
elif backend_name == 'ncnn':
# ncnn has 2 backend files but only xxx.param is needed
checkpoint = checkpoint.split('.param')[0] + '.param'
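# e.g. (illustrative) 'end2end.param end2end.bin' -> 'end2end.param'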
work_dir = report_txt_path.parent.absolute().resolve()
checkpoint = str(checkpoint).replace(str(work_dir), '${WORK_DIR}')
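# e.g. (illustrative) '<work_dir>/mmocr/crnn/onnxruntime/static/model' is
# reported as '${WORK_DIR}/mmocr/crnn/onnxruntime/static/model'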
# save to tmp file
tmp_str = f'{model_name},{model_config},{task_name},' \
f'{model_checkpoint_name},{dataset},{backend_name},' \
f'{deploy_config},{static_or_dynamic},{precision_type},' \
f'{conversion_result},{fps},'
tmp_str = f'{model_name},{model_config},{task_name},{checkpoint},' \
f'{dataset},{backend_name},{deploy_config},' \
f'{static_or_dynamic},{precision_type},{conversion_result},' \
f'{fps},'
# save to report
report_dict.get('model_name').append(model_name)
report_dict.get('model_config').append(model_config)
report_dict.get('task_name').append(task_name)
report_dict.get('model_checkpoint_name').append(model_checkpoint_name)
report_dict.get('dataset').append(dataset)
report_dict.get('backend_name').append(backend_name)
report_dict.get('deploy_config').append(deploy_config)
report_dict.get('static_or_dynamic').append(static_or_dynamic)
report_dict.get('precision_type').append(precision_type)
report_dict.get('conversion_result').append(conversion_result)
report_dict.get('fps').append(fps)
report_dict.get('Model').append(model_name)
report_dict.get('Model Config').append(model_config)
report_dict.get('Task').append(task_name)
report_dict.get('Checkpoint').append(checkpoint)
report_dict.get('Dataset').append(dataset)
report_dict.get('Backend').append(backend_name)
report_dict.get('Deploy Config').append(deploy_config)
report_dict.get('Static or Dynamic').append(static_or_dynamic)
report_dict.get('Precision Type').append(precision_type)
report_dict.get('Conversion Result').append(conversion_result)
# report_dict.get('FPS').append(fps)
for metric in metric_info:
for metric_name, metric_value in metric.items():
metric_name = str(metric_name)
report_dict.get(metric_name).append(metric_value)
tmp_str += f'{metric_value},'
report_dict.get('test_pass').append(test_pass)
report_dict.get('Test Pass').append(test_pass)
tmp_str += f'{test_pass}\n'
@@ -259,7 +263,8 @@ def update_report(report_dict: dict, model_name: str, model_config: str,
def get_pytorch_result(model_name: str, meta_info: dict, checkpoint_path: Path,
model_config_path: Path, model_config_name: str,
test_yaml_metric_info: dict, report_dict: dict,
logger: logging.Logger, report_txt_path: Path):
logger: logging.Logger, report_txt_path: Path,
codebase_name: str):
"""Get metric from metafile info of the model.
Args:
@@ -272,6 +277,7 @@ def get_pytorch_result(model_name: str, meta_info: dict, checkpoint_path: Path,
report_dict (dict): Report info dict.
logger (logging.Logger): Logger.
report_txt_path (Path): Report txt path.
codebase_name (str): Codebase name.
Returns:
Dict: metric info of the model
@@ -296,10 +302,14 @@ def get_pytorch_result(model_name: str, meta_info: dict, checkpoint_path: Path,
for _, v in test_yaml_metric_info.items():
if v.get('dataset') is None:
continue
dataset_tmp = using_dataset.get(v.get('dataset'), [])
if v.get('task_name') not in dataset_tmp:
dataset_tmp.append(v.get('task_name'))
using_dataset.update({v.get('dataset'): dataset_tmp})
dataset_list = v.get('dataset', [])
if not isinstance(dataset_list, list):
dataset_list = [dataset_list]
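# metric_info `dataset` may be a plain string or a list (e.g.
# `dataset: [Cityscapes, ADE20K]` in mmseg.yml); wrapping strings in a
# list lets both forms share the loop below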
for metric_dataset in dataset_list:
dataset_tmp = using_dataset.get(metric_dataset, [])
if v.get('task_name') not in dataset_tmp:
dataset_tmp.append(v.get('task_name'))
using_dataset.update({metric_dataset: dataset_tmp})
# Get metrics info from metafile
for metafile_metric in metafile_metric_info:
@@ -317,10 +327,10 @@ def get_pytorch_result(model_name: str, meta_info: dict, checkpoint_path: Path,
continue
dataset_type += f'{dataset} | '
if task_name not in using_dataset.get(dataset):
if task_name not in using_dataset.get(dataset, []):
# only add the metric with the correct dataset
logger.info(f'task_name ({task_name}) is not in'
f'{using_dataset.get(dataset)}, skip it...')
f'{using_dataset.get(dataset, [])}, skip it...')
continue
task_type += f'{task_name} | '
@@ -374,7 +384,7 @@ def get_pytorch_result(model_name: str, meta_info: dict, checkpoint_path: Path,
model_name=model_name,
model_config=str(model_config_path),
task_name=task_type,
model_checkpoint_name=str(checkpoint_path),
checkpoint=str(checkpoint_path),
dataset=dataset_type,
backend_name='Pytorch',
deploy_config='-',
@@ -384,7 +394,8 @@ def get_pytorch_result(model_name: str, meta_info: dict, checkpoint_path: Path,
fps=fps,
metric_info=metric_list,
test_pass='-',
report_txt_path=report_txt_path)
report_txt_path=report_txt_path,
codebase_name=codebase_name)
logger.info(f'Got {model_config_path} metric: {pytorch_metric}')
return pytorch_metric, dataset_type
@@ -554,7 +565,7 @@ def get_fps_metric(shell_res: int, pytorch_metric: dict, metric_key: str,
else:
# Got fps from log file
fps = get_info_from_log_file('FPS', log_path, metric_key, logger)
logger.info(f'Got fps = {fps}')
# logger.info(f'Got fps = {fps}')
# Got metric from log file
metric_value = get_info_from_log_file('metric', log_path, metric_key,
@@ -633,14 +644,13 @@ def get_backend_fps_metric(deploy_cfg_path: str, model_cfg_path: Path,
report_txt_path (Path): report txt save path.
model_name (str): Name of model in test yaml.
"""
cmd_str = f'cd {str(Path(__file__).absolute().parent.parent)} && ' \
'python3 tools/test.py ' \
cmd_str = 'python3 tools/test.py ' \
f'{deploy_cfg_path} ' \
f'{str(model_cfg_path.absolute())} ' \
f'--model "{convert_checkpoint_path}" ' \
f'--device {device_type} ' \
f'--log2file "{log_path}" ' \
f'--speed-test '
f'--speed-test ' \
f'--device {device_type} '
codebase_name = get_codebase(str(deploy_cfg_path)).value
if codebase_name != 'mmedit':
@@ -650,7 +660,9 @@ def get_backend_fps_metric(deploy_cfg_path: str, model_cfg_path: Path,
logger.info(f'Process cmd = {cmd_str}')
# Test backend
shell_res = os.system(cmd_str)
shell_res = subprocess.run(
cmd_str, cwd=str(Path(__file__).absolute().parent.parent),
shell=True).returncode
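# running with `cwd` set to the repo root replaces the former
# `cd <repo> && ...` prefix; the returncode decides pass/fail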
logger.info(f'Got shell_res = {shell_res}')
metric_key = ''
@@ -687,7 +699,7 @@ def get_backend_fps_metric(deploy_cfg_path: str, model_cfg_path: Path,
model_name=model_name,
model_config=str(model_cfg_path),
task_name=task_name,
model_checkpoint_name=convert_checkpoint_path,
checkpoint=convert_checkpoint_path,
dataset=dataset_type,
backend_name=backend_name,
deploy_config=str(deploy_cfg_path),
@@ -697,7 +709,8 @@ def get_backend_fps_metric(deploy_cfg_path: str, model_cfg_path: Path,
fps=fps,
metric_info=metric_list,
test_pass=str(test_pass),
report_txt_path=report_txt_path)
report_txt_path=report_txt_path,
codebase_name=codebase_name)
def get_precision_type(deploy_cfg_name: str):
@@ -825,6 +838,17 @@ def get_backend_result(pipeline_info: dict, model_cfg_path: Path,
deploy_cfg_path = Path(pipeline_info.get('deploy_config'))
backend_name = str(get_backend(str(deploy_cfg_path)).name).lower()
# change device_type for special case
if backend_name in ['ncnn', 'openvino']:
device_type = 'cpu'
elif backend_name == 'onnxruntime' and device_type != 'cpu':
import onnxruntime as ort
if ort.get_device() != 'GPU':
device_type = 'cpu'
logger.warning('Device type is forced to cpu '
'since onnxruntime-gpu is not installed')
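# onnxruntime.get_device() reports 'GPU' only for the CUDA-enabled build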
infer_type = \
'dynamic' if is_dynamic_shape(str(deploy_cfg_path)) else 'static'
@@ -836,12 +860,12 @@ def get_backend_result(pipeline_info: dict, model_cfg_path: Path,
Path(checkpoint_path).parent.name,
backend_name,
infer_type,
precision_type,
Path(checkpoint_path).stem)
backend_output_path.mkdir(parents=True, exist_ok=True)
# convert cmd string
cmd_str = f'cd {str(str(Path(__file__).absolute().parent.parent))} && ' \
'python3 ./tools/deploy.py ' \
cmd_str = 'python3 ./tools/deploy.py ' \
f'{str(deploy_cfg_path.absolute().resolve())} ' \
f'{str(model_cfg_path.absolute().resolve())} ' \
f'"{str(checkpoint_path.absolute().resolve())}" ' \
@@ -863,15 +887,32 @@ def get_backend_result(pipeline_info: dict, model_cfg_path: Path,
logger.info(f'Process cmd = {cmd_str}')
# Convert the model to specific backend
shell_res = os.system(cmd_str)
logger.info(f'Got shell_res = {shell_res}')
convert_result = False
convert_log_path = backend_output_path.joinpath('convert_log.log')
logger.info(f'Logging conversion log to {convert_log_path} ...')
file_handler = open(convert_log_path, 'w', encoding='utf-8')
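# write stdout/stderr to a log file instead of subprocess.PIPE; a full
# pipe buffer would block the child process and make wait() hang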
try:
# Convert the model to specific backend
process_res = subprocess.Popen(
cmd_str,
cwd=str(Path(__file__).absolute().parent.parent),
shell=True,
stdout=file_handler,
stderr=file_handler)
process_res.wait()
logger.info(f'Got shell_res = {process_res.returncode}')
# check if converted successes or not.
if process_res.returncode == 0:
convert_result = True
else:
convert_result = False
except Exception as e:
print(f'process convert error: {e}')
finally:
file_handler.close()
# check if converted successes or not.
if shell_res == 0:
convert_result = True
else:
convert_result = False
logger.info(f'Got convert_result = {convert_result}')
if isinstance(backend_file_name, list):
@@ -993,7 +1034,7 @@ def get_backend_result(pipeline_info: dict, model_cfg_path: Path,
model_name=model_name,
model_config=str(model_cfg_path),
task_name=task_name,
model_checkpoint_name=report_checkpoint,
checkpoint=report_checkpoint,
dataset=dataset_type,
backend_name=backend_name,
deploy_config=str(deploy_cfg_path),
@@ -1003,7 +1044,8 @@ def get_backend_result(pipeline_info: dict, model_cfg_path: Path,
fps=fps,
metric_info=metric_list,
test_pass=str(test_pass),
report_txt_path=report_txt_path)
report_txt_path=report_txt_path,
codebase_name=codebase_name)
def save_report(report_info: dict, report_save_path: Path,
@@ -1045,7 +1087,7 @@ def main():
}
backend_list = args.backends
if backend_list == ['all']:
if backend_list is None:
backend_list = [
'onnxruntime', 'tensorrt', 'openvino', 'ncnn', 'pplnn',
'torchscript'
@@ -1053,10 +1095,19 @@ def main():
assert isinstance(backend_list, list)
logger.info(f'Regression test backend list = {backend_list}')
if args.models is None:
logger.info('Regression test for all models in test yaml.')
else:
logger.info(f'Regression test models list = {args.models}')
work_dir = Path(args.work_dir)
work_dir.mkdir(parents=True, exist_ok=True)
for deploy_yaml in args.deploy_yml:
deploy_yaml_list = [
f'./tests/regression/{codebase}.yml' for codebase in args.codebase
]
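# e.g. `--codebase mmdet mmocr` maps to
# ['./tests/regression/mmdet.yml', './tests/regression/mmocr.yml']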
for deploy_yaml in deploy_yaml_list:
if not Path(deploy_yaml).exists():
raise FileNotFoundError(f'deploy_yaml {deploy_yaml} not found, '
@@ -1070,24 +1121,28 @@ def main():
report_txt_path = report_save_path.with_suffix('.txt')
report_dict = {
'model_name': [],
'model_config': [],
'task_name': [],
'model_checkpoint_name': [],
'dataset': [],
'backend_name': [],
'deploy_config': [],
'static_or_dynamic': [],
'precision_type': [],
'conversion_result': [],
'fps': []
'Model': [],
'Model Config': [],
'Task': [],
'Checkpoint': [],
'Dataset': [],
'Backend': [],
'Deploy Config': [],
'Static or Dynamic': [],
'Precision Type': [],
'Conversion Result': [],
# 'FPS': []
}
global_info = yaml_info.get('globals')
for metric_name in global_info.get('metric_info', {}):
report_dict.update({metric_name: []})
metric_info = global_info.get('metric_info', {})
report_dict.update({'test_pass': []})
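# one report column per metric declared in the test yaml (e.g. hmean-iou)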
for metric_name in metric_info:
report_dict.update({metric_name: []})
report_dict.update({'Test Pass': []})
global_info.update({'checkpoint_dir': args.checkpoint_dir})
global_info.update(
{'codebase_name': Path(deploy_yaml).stem.split('_')[0]})
with open(report_txt_path, 'w') as f_report:
title_str = ''
@@ -1103,6 +1158,11 @@ def main():
f'skipping {models.get("name")}...')
continue
model_name = models.get('name')
if args.models is not None and model_name not in args.models:
logger.info(f'Test specific model mode, skip {model_name}...')
continue
model_metafile_info, checkpoint_save_dir, codebase_dir = \
get_model_metafile_info(global_info, models, logger)
for model_config in model_metafile_info:
@@ -1126,9 +1186,9 @@ def main():
# Get pytorch from metafile.yml
pytorch_metric, metafile_dataset = get_pytorch_result(
models.get('name'), model_metafile_info, checkpoint_path,
model_name, model_metafile_info, checkpoint_path,
model_cfg_path, model_config, metric_info, report_dict,
logger, report_txt_path)
logger, report_txt_path, global_info.get('codebase_name'))
for pipeline in pipelines_info:
deploy_config = pipeline.get('deploy_config')
@@ -1150,13 +1210,17 @@ def main():
pytorch_metric, metric_info,
report_dict, test_type, logger,
backend_file_name, report_txt_path,
metafile_dataset, models.get('name'))
save_report(report_dict, report_save_path, logger)
metafile_dataset, model_name)
if len(report_dict.get('Model')) > 0:
save_report(report_dict, report_save_path, logger)
else:
logger.info(f'No model for {deploy_yaml}, not saving report.')
# merge report
merge_report(str(work_dir), logger)
logger.info('All done.')
if __name__ == '__main__':
main()