# Deployment

We provide deployment tools under the `tools/deployment` directory.
## Convert to ONNX (experimental)

We provide a script to convert a model to ONNX format. The converted model can be visualized by tools such as Netron. Besides, we also support comparing the output results between the PyTorch and ONNX models.
```shell
python tools/deployment/pytorch2onnx.py \
    ${MODEL_CONFIG_PATH} \
    ${MODEL_CKPT_PATH} \
    ${MODEL_TYPE} \
    ${IMAGE_PATH} \
    --output-file ${OUTPUT_FILE} \
    --device-id ${DEVICE_ID} \
    --opset-version ${OPSET_VERSION} \
    --verify \
    --verbose \
    --show \
    --dynamic-export
```
Description of arguments:

- `model_config` : The path of a model config file.
- `model_ckpt` : The path of a model checkpoint file.
- `model_type` : The model type of the config file, options: `recog`, `det`.
- `image_path` : The path to the input image file.
- `--output-file` : The path of the output ONNX model. If not specified, it will be set to `tmp.onnx`.
- `--device-id` : Which GPU to use. If not specified, it will be set to 0.
- `--opset-version` : ONNX opset version, defaults to 11.
- `--verify` : Determines whether to verify the correctness of an exported model. If not specified, it will be set to `False`.
- `--verbose` : Determines whether to print the architecture of the exported model. If not specified, it will be set to `False`.
- `--show` : Determines whether to visualize the outputs of ONNXRuntime and PyTorch. If not specified, it will be set to `False`.
- `--dynamic-export` : Determines whether to export the ONNX model with dynamic input and output shapes. If not specified, it will be set to `False`.
Note: This tool is still experimental. Some customized operators are not supported for now. We only support `detection` and `recognition` for now.
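
For example, here is a minimal sketch of exporting a DBNet detection model to ONNX with verification enabled. The checkpoint and demo image paths are placeholders for illustration; substitute your own files.

```shell
# Example: export DBNet (text detection) to ONNX and verify the output against PyTorch.
# The checkpoint and test image paths below are placeholders.
python tools/deployment/pytorch2onnx.py \
    configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py \
    checkpoints/dbnet_r18_fpnc_1200e_icdar2015.pth \
    det \
    demo/demo_text_det.jpg \
    --output-file dbnet.onnx \
    --opset-version 11 \
    --dynamic-export \
    --verify
```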
### List of supported models exportable to ONNX

The table below lists the models that are guaranteed to be exportable to ONNX and runnable in ONNX Runtime.
| Model  | Config                                 | Dynamic Shape | Batch Inference | Note                                   |
|:------:|:--------------------------------------:|:-------------:|:---------------:|:--------------------------------------:|
| DBNet  | `dbnet_r18_fpnc_1200e_icdar2015.py`    | Y             | N               |                                        |
| PSENet | `psenet_r50_fpnf_600e_ctw1500.py`      | Y             | Y               |                                        |
| PSENet | `psenet_r50_fpnf_600e_icdar2015.py`    | Y             | Y               |                                        |
| PANet  | `panet_r18_fpem_ffm_600e_ctw1500.py`   | Y             | Y               |                                        |
| PANet  | `panet_r18_fpem_ffm_600e_icdar2015.py` | Y             | Y               |                                        |
| CRNN   | `crnn_academic_dataset.py`             | Y             | Y               | CRNN only accepts input with height 32 |
Notes:

- All models above are tested with PyTorch==1.8.1 and onnxruntime==1.7.0.
- If you meet any problem with the listed models above, please create an issue and it will be taken care of soon. For models not included in the list, please try to solve them by yourself.
- Because this feature is experimental and may change quickly, please always try with the latest `mmcv` and `mmocr`.
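
As a second sketch, recognition models go through the same script with `recog` as the model type. Again, the checkpoint and demo image paths are placeholders rather than fixed locations.

```shell
# Example: export CRNN (text recognition) to ONNX.
# Checkpoint and demo image paths are placeholders; CRNN only accepts input with height 32.
python tools/deployment/pytorch2onnx.py \
    configs/textrecog/crnn/crnn_academic_dataset.py \
    checkpoints/crnn_academic_dataset.pth \
    recog \
    demo/demo_text_recog.jpg \
    --output-file crnn.onnx \
    --dynamic-export \
    --verify
```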
## Convert ONNX to TensorRT (experimental)

We also provide a script to convert an ONNX model to TensorRT format. Besides, we support comparing the output results between the ONNX and TensorRT models.
```shell
python tools/deployment/onnx2tensorrt.py \
    ${MODEL_CONFIG_PATH} \
    ${MODEL_TYPE} \
    ${IMAGE_PATH} \
    ${ONNX_FILE} \
    --trt-file ${OUT_TENSORRT} \
    --max-shape INT INT INT INT \
    --min-shape INT INT INT INT \
    --workspace-size INT \
    --fp16 \
    --verify \
    --show \
    --verbose
```
Description of arguments:

- `model_config` : The path of a model config file.
- `model_type` : The model type of the config file, options: `recog`, `det`.
- `image_path` : The path to the input image file.
- `onnx_file` : The path to the input ONNX file.
- `--trt-file` : The path of the output TensorRT model. If not specified, it will be set to `tmp.trt`.
- `--max-shape` : Maximum shape of model input.
- `--min-shape` : Minimum shape of model input.
- `--workspace-size` : Max workspace size in GiB. If not specified, it will be set to 1 GiB.
- `--fp16` : Determines whether to export TensorRT with fp16 mode. If not specified, it will be set to `False`.
- `--verify` : Determines whether to verify the correctness of an exported model. If not specified, it will be set to `False`.
- `--show` : Determines whether to show the outputs of ONNX and TensorRT. If not specified, it will be set to `False`.
- `--verbose` : Determines whether to print verbose logging messages while creating the TensorRT engine. If not specified, it will be set to `False`.
Note: This tool is still experimental. Some customized operators are not supported for now. We only support `detection` and `recognition` for now.
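
Below is a sketch of building a TensorRT engine from the DBNet ONNX model exported earlier, with dynamic input shapes. The shape ranges (N C H W) and file names are illustrative assumptions, not values prescribed by the config; tune them to your data.

```shell
# Example: convert the DBNet ONNX model to a TensorRT engine with dynamic shapes.
# Shape ranges and file names are illustrative placeholders.
python tools/deployment/onnx2tensorrt.py \
    configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py \
    det \
    demo/demo_text_det.jpg \
    dbnet.onnx \
    --trt-file dbnet.trt \
    --min-shape 1 3 256 256 \
    --max-shape 1 3 1280 1280 \
    --workspace-size 1 \
    --verify
```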
### List of supported models exportable to TensorRT

The table below lists the models that are guaranteed to be exportable to a TensorRT engine and runnable in TensorRT.
| Model  | Config                                 | Dynamic Shape | Batch Inference | Note                                   |
|:------:|:--------------------------------------:|:-------------:|:---------------:|:--------------------------------------:|
| DBNet  | `dbnet_r18_fpnc_1200e_icdar2015.py`    | Y             | N               |                                        |
| PSENet | `psenet_r50_fpnf_600e_ctw1500.py`      | Y             | Y               |                                        |
| PSENet | `psenet_r50_fpnf_600e_icdar2015.py`    | Y             | Y               |                                        |
| PANet  | `panet_r18_fpem_ffm_600e_ctw1500.py`   | Y             | Y               |                                        |
| PANet  | `panet_r18_fpem_ffm_600e_icdar2015.py` | Y             | Y               |                                        |
| CRNN   | `crnn_academic_dataset.py`             | Y             | Y               | CRNN only accepts input with height 32 |
Notes:

- All models above are tested with PyTorch==1.8.1, onnxruntime==1.7.0 and tensorrt==7.2.1.6.
- If you meet any problem with the listed models above, please create an issue and it will be taken care of soon. For models not included in the list, please try to solve them by yourself.
- Because this feature is experimental and may change quickly, please always try with the latest `mmcv` and `mmocr`.
## Evaluate ONNX and TensorRT Models (experimental)

We provide methods to evaluate TensorRT and ONNX models in `tools/deployment/deploy_test.py`.
### Prerequisite

To evaluate ONNX and TensorRT models, onnx, onnxruntime and TensorRT should be installed first. Install `mmcv-full` with ONNXRuntime custom ops and TensorRT plugins by following *ONNXRuntime in mmcv* and *TensorRT plugin in mmcv*.
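
A minimal sketch of the Python-side prerequisites is shown below, pinning onnxruntime to the version used in the result tables. Note that the prebuilt `mmcv-full` wheels do not include the custom ops; `mmcv-full` must be rebuilt from source with its ONNXRuntime/TensorRT extensions enabled as described in the mmcv documentation, and TensorRT itself is installed separately from NVIDIA's distribution.

```shell
# Install ONNX and ONNX Runtime (onnxruntime version matches the tested setup in the tables below).
pip install onnx onnxruntime==1.7.0
```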
### Usage

```shell
python tools/deployment/deploy_test.py \
    ${CONFIG_FILE} \
    ${MODEL_PATH} \
    ${MODEL_TYPE} \
    ${BACKEND} \
    --eval ${METRICS} \
    --device ${DEVICE}
```
Description of all arguments:

- `model_config` : The path of a model config file.
- `model_file` : The path of a TensorRT or an ONNX model file.
- `model_type` : Detection or recognition model to deploy. Choose `recog` or `det`.
- `backend` : The backend for testing, choose `TensorRT` or `ONNXRuntime`.
- `--eval` : The evaluation metrics. `acc` for recognition models, `hmean-iou` for detection models.
- `--device` : Device for evaluation, `cuda:0` as default.
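
For instance, here is a sketch of evaluating the DBNet ONNX model exported above with the ONNXRuntime backend; the model file name is the placeholder output of the earlier export command, and the dataset comes from the config file.

```shell
# Example: evaluate the exported DBNet ONNX model with the hmean-iou metric on ONNXRuntime.
python tools/deployment/deploy_test.py \
    configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py \
    dbnet.onnx \
    det \
    ONNXRuntime \
    --eval hmean-iou \
    --device cuda:0
```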
### Results and Models

| Model  | Config                                 | Dataset   | Metric    | PyTorch | ONNX Runtime | TensorRT FP32 | TensorRT FP16 |
|:------:|:--------------------------------------:|:---------:|:---------:|:-------:|:------------:|:-------------:|:-------------:|
| DBNet  | `dbnet_r18_fpnc_1200e_icdar2015.py`    | icdar2015 | Recall    | 0.731   | 0.731        | 0.678         | 0.679         |
| DBNet  | `dbnet_r18_fpnc_1200e_icdar2015.py`    | icdar2015 | Precision | 0.871   | 0.871        | 0.844         | 0.842         |
| DBNet  | `dbnet_r18_fpnc_1200e_icdar2015.py`    | icdar2015 | Hmean     | 0.795   | 0.795        | 0.752         | 0.752         |
| DBNet* | `dbnet_r18_fpnc_1200e_icdar2015.py`    | icdar2015 | Recall    | 0.720   | 0.720        | 0.720         | 0.718         |
| DBNet* | `dbnet_r18_fpnc_1200e_icdar2015.py`    | icdar2015 | Precision | 0.868   | 0.868        | 0.868         | 0.868         |
| DBNet* | `dbnet_r18_fpnc_1200e_icdar2015.py`    | icdar2015 | Hmean     | 0.787   | 0.787        | 0.787         | 0.786         |
| PSENet | `psenet_r50_fpnf_600e_icdar2015.py`    | icdar2015 | Recall    | 0.753   | 0.753        | 0.753         | 0.752         |
| PSENet | `psenet_r50_fpnf_600e_icdar2015.py`    | icdar2015 | Precision | 0.867   | 0.867        | 0.867         | 0.867         |
| PSENet | `psenet_r50_fpnf_600e_icdar2015.py`    | icdar2015 | Hmean     | 0.806   | 0.806        | 0.806         | 0.805         |
| PANet  | `panet_r18_fpem_ffm_600e_icdar2015.py` | icdar2015 | Recall    | 0.740   | 0.740        | 0.687         | N/A           |
| PANet  | `panet_r18_fpem_ffm_600e_icdar2015.py` | icdar2015 | Precision | 0.860   | 0.860        | 0.815         | N/A           |
| PANet  | `panet_r18_fpem_ffm_600e_icdar2015.py` | icdar2015 | Hmean     | 0.796   | 0.796        | 0.746         | N/A           |
| PANet* | `panet_r18_fpem_ffm_600e_icdar2015.py` | icdar2015 | Recall    | 0.736   | 0.736        | 0.736         | N/A           |
| PANet* | `panet_r18_fpem_ffm_600e_icdar2015.py` | icdar2015 | Precision | 0.857   | 0.857        | 0.857         | N/A           |
| PANet* | `panet_r18_fpem_ffm_600e_icdar2015.py` | icdar2015 | Hmean     | 0.792   | 0.792        | 0.792         | N/A           |
| CRNN   | `crnn_academic_dataset.py`             | IIIT5K    | Acc       | 0.806   | 0.806        | 0.806         | 0.806         |
Notes:

- The TensorRT upsampling operation is a little different from PyTorch's. For DBNet and PANet, we suggest replacing the upsampling operations in nearest mode with bilinear mode (here for PANet; here and here for DBNet). As shown in the table above, networks marked with * use the changed upsampling mode.
- Note that switching the upsampling mode costs less performance than the degradation caused by keeping the nearest mode in TensorRT. However, the network weights are trained with the nearest mode. To pursue the best performance, using bilinear mode for both training and TensorRT deployment is recommended.
- All ONNX and TensorRT models are evaluated with dynamic shape on the datasets, and images are preprocessed according to the original config file.
- This tool is still experimental, and we only support `detection` and `recognition` for now.