Add RKNN support. (#865)

* save codes

* support resnet and yolov3

* support yolox

* fix lint

* add mmseg support and a doc

* add UT

* update supported model list

* fix ci

* refine docstring

* resolve comments

* remove output_tensor_type

* resolve comments

* update readme
Author: AllentDan, 2022-09-06 11:48:39 +08:00 (committed via GitHub)
Commit: 124635ec5f, parent: 6b01a2e649
32 changed files with 913 additions and 169 deletions


@ -55,9 +55,9 @@ The currently supported codebases and models are as follows, and more will be in
Models can be exported and run in the following backends, and more will be compatible
| ONNX Runtime | TensorRT | ppl.nn | ncnn | OpenVINO | LibTorch | snpe | Ascend | Core ML | RKNN | more |
| ------------ | -------- | ------ | ---- | -------- | -------- | ---- | ------ | ------- | ---- | ---------------------------------------------- |
| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | [benchmark](docs/en/03-benchmark/benchmark.md) |
### Efficient and scalable C/C++ SDK Framework


@ -53,9 +53,9 @@ MMDeploy is the [OpenMMLab](https://openmmlab.com/) model deployment toolbox, **providing
### Multiple inference backends are supported
| ONNX Runtime | TensorRT | ppl.nn | ncnn | OpenVINO | LibTorch | snpe | Ascend | Core ML | RKNN | more |
| ------------ | -------- | ------ | ---- | -------- | -------- | ---- | ------ | ------- | ---- | ---------------------------------------------- |
| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | [benchmark](docs/en/03-benchmark/benchmark.md) |
### Highly customizable SDK


@ -0,0 +1,8 @@
backend_config = dict(
    type='rknn',
    common_config=dict(
        mean_values=None,
        std_values=None,
        target_platform='rk3588',
        optimization_level=3),
    quantization_config=dict(do_quantization=False, dataset=None))


@ -0,0 +1,5 @@
_base_ = ['./classification_static.py', '../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[224, 224])
codebase_config = dict(model_type='rknn')
backend_config = dict(input_size_list=[[3, 224, 224]])


@ -0,0 +1,17 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[640, 640])
codebase_config = dict(model_type='rknn')
backend_config = dict(input_size_list=[[3, 640, 640]])
partition_config = dict(
    type='rknn',  # the partition policy name
    apply_marks=True,  # should always be set to True
    partition_cfg=[
        dict(
            save_file='model.onnx',  # name to save the partitioned onnx model
            start=['detector_forward:input'],  # [mark_name:input/output, ...]
            end=['yolo_head:input'])  # [mark_name:input/output, ...]
    ])


@ -0,0 +1,7 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[512, 512])
codebase_config = dict(model_type='rknn')
backend_config = dict(input_size_list=[[3, 512, 512]])


@ -2,82 +2,82 @@
The table below lists the models that are guaranteed to be exportable to other backends.
| Model | Codebase | TorchScript | OnnxRuntime | TensorRT | ncnn | PPLNN | OpenVINO | Ascend | RKNN | Model config |
| :-------------------------- | :--------------- | :---------: | :---------: | :------: | :--: | :---: | :------: | :----: | :--: | :----------: |
| RetinaNet | MMDetection | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
| Faster R-CNN | MMDetection | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn) |
| YOLOv3 | MMDetection | Y | Y | Y | Y | N | Y | Y | Y | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolo) |
| YOLOX | MMDetection | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolox) |
| FCOS | MMDetection | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos) |
| FSAF | MMDetection | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf) |
| Mask R-CNN | MMDetection | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask_rcnn) |
| SSD[\*](#note) | MMDetection | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ssd) |
| FoveaBox | MMDetection | Y | Y | N | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox) |
| ATSS | MMDetection | N | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/atss) |
| GFL | MMDetection | N | Y | Y | N | ? | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gfl) |
| Cascade R-CNN | MMDetection | N | Y | Y | N | Y | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cascade_rcnn) |
| Cascade Mask R-CNN | MMDetection | N | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cascade_rcnn) |
| Swin Transformer[\*](#note) | MMDetection | N | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/swin) |
| VFNet | MMDetection | N | N | N | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/vfnet) |
| RepPoints | MMDetection | N | N | Y | N | ? | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints) |
| DETR | MMDetection | N | Y | Y | N | ? | N | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detr) |
| ResNet | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnet) |
| ResNeXt | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnext) |
| SE-ResNet | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/seresnet) |
| MobileNetV2 | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/mobilenet_v2) |
| ShuffleNetV1 | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v1) |
| ShuffleNetV2 | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v2) |
| VisionTransformer | MMClassification | Y | Y | Y | Y | ? | Y | Y | N | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/vision_transformer) |
| SwinTransformer | MMClassification | Y | Y | Y | N | ? | N | ? | N | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/swin_transformer) |
| FCN | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fcn) |
| PSPNet[\*static](#note) | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/pspnet) |
| DeepLabV3 | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3) |
| DeepLabV3+ | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3plus) |
| Fast-SCNN[\*static](#note) | MMSegmentation | Y | Y | Y | N | Y | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fastscnn) |
| UNet | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/unet) |
| ANN[\*](#note) | MMSegmentation | Y | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ann) |
| APCNet | MMSegmentation | Y | Y | Y | Y | N | N | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/apcnet) |
| BiSeNetV1 | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/bisenetv1) |
| BiSeNetV2 | MMSegmentation | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/bisenetv2) |
| CGNet | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/cgnet) |
| DMNet | MMSegmentation | ? | Y | N | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/dmnet) |
| DNLNet | MMSegmentation | ? | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/dnlnet) |
| EMANet | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/emanet) |
| EncNet | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/encnet) |
| ERFNet | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/erfnet) |
| FastFCN | MMSegmentation | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fastfcn) |
| GCNet | MMSegmentation | Y | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/gcnet) |
| ICNet[\*](#note) | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/icnet) |
| ISANet[\*static](#note) | MMSegmentation | N | Y | Y | N | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/isanet) |
| NonLocal Net | MMSegmentation | ? | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/nonlocal_net) |
| OCRNet | MMSegmentation | ? | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ocrnet) |
| PointRend | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/point_rend) |
| Semantic FPN | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/sem_fpn) |
| STDC | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/stdc) |
| UPerNet[\*](#note) | MMSegmentation | ? | Y | Y | N | N | N | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/upernet) |
| DANet | MMSegmentation | ? | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/danet) |
| Segmenter[\*static](#note) | MMSegmentation | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/segmenter) |
| SRCNN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/srcnn) |
| ESRGAN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/esrgan) |
| SRGAN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/srresnet_srgan) |
| SRResNet | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/srresnet_srgan) |
| Real-ESRGAN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/real_esrgan) |
| EDSR | MMEditing | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/edsr) |
| RDN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/rdn) |
| DBNet | MMOCR | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/dbnet) |
| PANet | MMOCR | Y | Y | Y | Y | ? | Y | Y | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/panet) |
| PSENet | MMOCR | Y | Y | Y | Y | ? | Y | Y | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/psenet) |
| CRNN | MMOCR | Y | Y | Y | Y | Y | N | N | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/crnn) |
| SAR | MMOCR | N | Y | N | N | N | N | N | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/sar) |
| SATRN | MMOCR | Y | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/satrn) |
| HRNet | MMPose | N | Y | Y | Y | N | Y | N | N | [config](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#hrnet-cvpr-2019) |
| MSPN | MMPose | N | Y | Y | Y | N | Y | N | N | [config](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#mspn-arxiv-2019) |
| LiteHRNet | MMPose | N | Y | Y | N | N | Y | N | N | [config](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#litehrnet-cvpr-2021) |
| PointPillars | MMDetection3d | ? | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars) |
| CenterPoint (pillar) | MMDetection3d | ? | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/centerpoint) |
| RotatedRetinaNet | RotatedDetection | N | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmrotate/blob/main/configs/rotated_retinanet/README.md) |
| Oriented RCNN | RotatedDetection | N | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmrotate/blob/main/configs/oriented_rcnn/README.md) |
| Gliding Vertex | RotatedDetection | N | N | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmrotate/blob/main/configs/gliding_vertex/README.md) |
### Note


@ -0,0 +1,80 @@
# RKNN support
This tutorial is based on Linux systems like Ubuntu-18.04 and Rockchip NPU like `rk3588`.
## Installation
It is recommended to create a virtual environment for the project.
1. Get RKNN-Toolkit2:
```
git clone https://github.com/rockchip-linux/rknn-toolkit2
```
2. Install the RKNN Python package following the [official doc](https://github.com/rockchip-linux/rknn-toolkit2/tree/master/doc). In our testing, we used rknn-toolkit2 1.2.0 with commit id `834ba0b0a1ab8ee27024443d77b02b5ba48b67fc`.
3. Reinstall MMDeploy from source following the [instructions](../01-how-to-build/build_from_source.md). Note that MMDeploy and RKNN have conflicting pip dependencies. Here are the suggested package versions for Python 3.6:
```
protobuf==3.19.4
onnx==1.8.0
onnxruntime==1.8.0
torch==1.8.0
torchvision==0.9.0
```
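For instance, these can be pinned in a single pip command (a sketch; the versions are the ones listed above, adjust to your environment):
```bash
pip install protobuf==3.19.4 onnx==1.8.0 onnxruntime==1.8.0 \
    torch==1.8.0 torchvision==0.9.0
```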
To work with models from [MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/get_started.md), you may need to install it additionally.
## Usage
Example:
```bash
python tools/deploy.py \
configs/mmdet/detection/detection_rknn_static.py \
/mmdetection_dir/mmdetection/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py \
/tmp/snapshots/yolov3_d53_mstrain-608_273e_coco_20210518_115020-a2c3acb8.pth \
tests/data/tiger.jpeg \
--work-dir ../deploy_result \
--device cpu
```
## Deployment config
In the deployment config, you can modify `backend_config` to your preference. An example `backend_config` for MMClassification is shown below:
```python
backend_config = dict(
    type='rknn',
    common_config=dict(
        mean_values=None,
        std_values=None,
        target_platform='rk3588',
        optimization_level=3),
    quantization_config=dict(do_quantization=False, dataset=None),
    input_size_list=[[3, 224, 224]])
```
The contents of `common_config` are passed to `rknn.config()`, and the contents of `quantization_config` are used to control `rknn.build()`.
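For reference, here is a minimal sketch of how these two dictionaries reach the RKNN-Toolkit2 API during conversion (simplified from the converter in `mmdeploy/backend/rknn/onnx2rknn.py`; file names are placeholders and error checking is omitted):
```python
from rknn.api import RKNN

backend_config = dict(
    common_config=dict(target_platform='rk3588', optimization_level=3),
    quantization_config=dict(do_quantization=False, dataset=None),
    input_size_list=[[3, 224, 224]])

rknn = RKNN(verbose=True)
rknn.config(**backend_config['common_config'])  # common_config -> rknn.config()
rknn.load_onnx(
    model='end2end.onnx',
    input_size_list=backend_config['input_size_list'])
rknn.build(**backend_config['quantization_config'])  # quantization_config -> rknn.build()
rknn.export_rknn('end2end.rknn')
```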
## Troubleshooting
- Quantization fails.
Empirically, RKNN requires unnormalized inputs when `do_quantization` is set to `False`. Please change the `Normalize` settings in the `model_cfg` from
```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
```
to
```python
img_norm_cfg = dict(
    mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True)
```
In addition, the `mean_values` and `std_values` in the deploy config should be set to the original normalization settings of the `model_cfg`, i.e. `mean_values=[123.675, 116.28, 103.53]` and `std_values=[58.395, 57.12, 57.375]`.
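With that change, the example `backend_config` above would look roughly like this (a sketch following the note above; only the normalization values differ):
```python
backend_config = dict(
    type='rknn',
    common_config=dict(
        mean_values=[123.675, 116.28, 103.53],  # moved here from img_norm_cfg
        std_values=[58.395, 57.12, 57.375],
        target_platform='rk3588',
        optimization_level=3),
    quantization_config=dict(do_quantization=False, dataset=None),
    input_size_list=[[3, 224, 224]])
```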


@ -2,82 +2,82 @@
Self-tested model-backend combinations:
| Model | Codebase | TorchScript | OnnxRuntime | TensorRT | ncnn | PPLNN | OpenVINO | Ascend | RKNN | Model config |
| :-------------------------- | :--------------- | :---------: | :---------: | :------: | :--: | :---: | :------: | :----: | :--: | :----------: |
| RetinaNet | MMDetection | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
| Faster R-CNN | MMDetection | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn) |
| YOLOv3 | MMDetection | Y | Y | Y | Y | N | Y | Y | Y | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolo) |
| YOLOX | MMDetection | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolox) |
| FCOS | MMDetection | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos) |
| FSAF | MMDetection | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf) |
| Mask R-CNN | MMDetection | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask_rcnn) |
| SSD[\*](#note) | MMDetection | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ssd) |
| FoveaBox | MMDetection | Y | Y | N | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/foveabox) |
| ATSS | MMDetection | N | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/atss) |
| GFL | MMDetection | N | Y | Y | N | ? | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gfl) |
| Cascade R-CNN | MMDetection | N | Y | Y | N | Y | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cascade_rcnn) |
| Cascade Mask R-CNN | MMDetection | N | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cascade_rcnn) |
| Swin Transformer[\*](#note) | MMDetection | N | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/swin) |
| VFNet | MMDetection | N | N | N | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/vfnet) |
| RepPoints | MMDetection | N | N | Y | N | ? | Y | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/reppoints) |
| DETR | MMDetection | N | Y | Y | N | ? | N | N | N | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detr) |
| ResNet | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnet) |
| ResNeXt | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/resnext) |
| SE-ResNet | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/seresnet) |
| MobileNetV2 | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/mobilenet_v2) |
| ShuffleNetV1 | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v1) |
| ShuffleNetV2 | MMClassification | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/shufflenet_v2) |
| VisionTransformer | MMClassification | Y | Y | Y | Y | ? | Y | Y | N | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/vision_transformer) |
| SwinTransformer | MMClassification | Y | Y | Y | N | ? | N | ? | N | [config](https://github.com/open-mmlab/mmclassification/tree/master/configs/swin_transformer) |
| FCN | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fcn) |
| PSPNet[\*static](#note) | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/pspnet) |
| DeepLabV3 | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3) |
| DeepLabV3+ | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/deeplabv3plus) |
| Fast-SCNN[\*static](#note) | MMSegmentation | Y | Y | Y | N | Y | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fastscnn) |
| UNet | MMSegmentation | Y | Y | Y | Y | Y | Y | Y | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/unet) |
| ANN[\*](#note) | MMSegmentation | Y | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ann) |
| APCNet | MMSegmentation | Y | Y | Y | Y | N | N | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/apcnet) |
| BiSeNetV1 | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/bisenetv1) |
| BiSeNetV2 | MMSegmentation | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/bisenetv2) |
| CGNet | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/cgnet) |
| DMNet | MMSegmentation | ? | Y | N | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/dmnet) |
| DNLNet | MMSegmentation | ? | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/dnlnet) |
| EMANet | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/emanet) |
| EncNet | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/encnet) |
| ERFNet | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/erfnet) |
| FastFCN | MMSegmentation | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/fastfcn) |
| GCNet | MMSegmentation | Y | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/gcnet) |
| ICNet[\*](#note) | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/icnet) |
| ISANet[\*static](#note) | MMSegmentation | N | Y | Y | N | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/isanet) |
| NonLocal Net | MMSegmentation | ? | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/nonlocal_net) |
| OCRNet | MMSegmentation | ? | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/ocrnet) |
| PointRend | MMSegmentation | Y | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/point_rend) |
| Semantic FPN | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/sem_fpn) |
| STDC | MMSegmentation | Y | Y | Y | Y | N | Y | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/stdc) |
| UPerNet[\*](#note) | MMSegmentation | ? | Y | Y | N | N | N | N | Y | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/upernet) |
| DANet | MMSegmentation | ? | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/danet) |
| Segmenter[\*static](#note) | MMSegmentation | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/segmenter) |
| SRCNN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/srcnn) |
| ESRGAN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/esrgan) |
| SRGAN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/srresnet_srgan) |
| SRResNet | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/srresnet_srgan) |
| Real-ESRGAN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/real_esrgan) |
| EDSR | MMEditing | Y | Y | Y | Y | N | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/edsr) |
| RDN | MMEditing | Y | Y | Y | Y | Y | Y | N | N | [config](https://github.com/open-mmlab/mmediting/tree/master/configs/restorers/rdn) |
| DBNet | MMOCR | Y | Y | Y | Y | Y | Y | Y | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/dbnet) |
| PANet | MMOCR | Y | Y | Y | Y | ? | Y | Y | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/panet) |
| PSENet | MMOCR | Y | Y | Y | Y | ? | Y | Y | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/psenet) |
| CRNN | MMOCR | Y | Y | Y | Y | Y | N | N | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/crnn) |
| SAR | MMOCR | N | Y | N | N | N | N | N | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/sar) |
| SATRN | MMOCR | Y | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmocr/tree/main/configs/textrecog/satrn) |
| HRNet | MMPose | N | Y | Y | Y | N | Y | N | N | [config](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#hrnet-cvpr-2019) |
| MSPN | MMPose | N | Y | Y | Y | N | Y | N | N | [config](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#mspn-arxiv-2019) |
| LiteHRNet | MMPose | N | Y | Y | N | N | Y | N | N | [config](https://mmpose.readthedocs.io/en/latest/papers/backbones.html#litehrnet-cvpr-2021) |
| PointPillars | MMDetection3d | ? | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/pointpillars) |
| CenterPoint (pillar) | MMDetection3d | ? | Y | Y | N | N | Y | N | N | [config](https://github.com/open-mmlab/mmdetection3d/blob/master/configs/centerpoint) |
| RotatedRetinaNet | RotatedDetection | N | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmrotate/blob/main/configs/rotated_retinanet/README.md) |
| Oriented RCNN | RotatedDetection | N | Y | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmrotate/blob/main/configs/oriented_rcnn/README.md) |
| Gliding Vertex | RotatedDetection | N | N | Y | N | N | N | N | N | [config](https://github.com/open-mmlab/mmrotate/blob/main/configs/gliding_vertex/README.md) |
## Note


@ -0,0 +1,11 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmdeploy.backend.rknn import is_available

__all__ = ['is_available']

if is_available():
    from mmdeploy.backend.rknn.onnx2rknn import onnx2rknn as _onnx2rknn
    from ..core import PIPELINE_MANAGER
    onnx2rknn = PIPELINE_MANAGER.register_pipeline()(_onnx2rknn)

    __all__ += ['onnx2rknn']


@ -0,0 +1,31 @@
# Copyright (c) OpenMMLab. All rights reserved.
import importlib
import re
import subprocess


def is_available():
    """Check whether rknn is installed.

    Returns:
        bool: True if rknn package is installed.
    """
    return importlib.util.find_spec('rknn') is not None


def device_available():
    """Check whether the device is available.

    Returns:
        bool: True if the device is available.
    """
    ret = subprocess.check_output('adb devices', shell=True)
    match = re.search(r'\\n\w+\\tdevice', str(ret))
    return match is not None


__all__ = []

if is_available():
    from .wrapper import RKNNWrapper
    __all__ += ['RKNNWrapper']
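A quick availability check built on the helpers above might look like this (a hypothetical usage sketch; `device_available` additionally requires `adb` on the host `PATH`):
```python
from mmdeploy.backend.rknn import is_available

if is_available():
    # RKNNWrapper is only exported when the rknn package is importable
    from mmdeploy.backend.rknn import RKNNWrapper  # noqa: F401
    print('rknn-toolkit2 is installed; RKNNWrapper can be used')
else:
    print('rknn python package not found; install rknn-toolkit2 first')
```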


@ -0,0 +1,75 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Optional, Union

import mmcv
from rknn.api import RKNN

from mmdeploy.utils import (get_common_config, get_onnx_config,
                            get_partition_config, get_quantization_config,
                            get_root_logger, load_config)
from mmdeploy.utils.config_utils import get_backend_config


def onnx2rknn(onnx_model: str,
              output_path: str,
              deploy_cfg: Union[str, mmcv.Config],
              dataset_file: Optional[str] = None,
              **kwargs):
    """Convert ONNX to RKNN.

    RKNN-Toolkit2 is a software development kit for users to perform model
    conversion, inference and performance evaluation on PC and Rockchip
    NPU platforms.

    Args:
        onnx_model (str): Input onnx model.
        output_path (str): File path to save RKNN model.
        deploy_cfg (str | mmcv.Config): The path or content of config.
        dataset_file (str | None): The dataset file for quantization.
            Defaults to None.
    """
    logger = get_root_logger()
    # load deploy_cfg if necessary
    deploy_cfg = load_config(deploy_cfg)[0]

    common_params = get_common_config(deploy_cfg)
    onnx_params = get_onnx_config(deploy_cfg)
    quantization_cfg = get_quantization_config(deploy_cfg)

    input_names = onnx_params.get('input_names', None)
    output_names = onnx_params.get('output_names', None)
    input_size_list = get_backend_config(deploy_cfg).get(
        'input_size_list', None)
    # update output_names for partition models
    if get_partition_config(deploy_cfg) is not None:
        import onnx
        _onnx_model = onnx.load(onnx_model)
        output_names = [node.name for node in _onnx_model.graph.output]

    rknn = RKNN(verbose=True)
    rknn.config(**common_params)

    ret = rknn.load_onnx(
        model=onnx_model,
        inputs=input_names,
        input_size_list=input_size_list,
        outputs=output_names)
    if ret != 0:
        logger.error('Load model failed!')
        exit(ret)

    dataset_cfg = quantization_cfg.get('dataset', None)
    do_quantization = quantization_cfg.get('do_quantization', False)
    if dataset_cfg is None and dataset_file is None:
        do_quantization = False
        logger.warning('no dataset passed in, quantization is skipped')
    if dataset_file is None:
        dataset_file = dataset_cfg
    ret = rknn.build(do_quantization=do_quantization, dataset=dataset_file)
    if ret != 0:
        logger.error('Build model failed!')
        exit(ret)

    ret = rknn.export_rknn(output_path)
    if ret != 0:
        logger.error('Export rknn model failed!')
        exit(ret)
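A minimal call of the function above might look like this (a usage sketch; the paths and the config name are placeholders, assuming an ONNX model has already been exported with an RKNN deploy config):
```python
from mmdeploy.backend.rknn.onnx2rknn import onnx2rknn

onnx2rknn(
    onnx_model='work_dir/end2end.onnx',
    output_path='work_dir/end2end.rknn',
    deploy_cfg='configs/mmcls/classification_rknn_static.py',
    dataset_file=None)  # no dataset given, so quantization is skipped
```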


@ -0,0 +1,69 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Dict, Optional, Sequence

import numpy as np
import torch
from rknn.api import RKNN

from mmdeploy.utils import Backend, get_root_logger
from mmdeploy.utils.timer import TimeCounter
from ..base import BACKEND_WRAPPER, BaseWrapper


@BACKEND_WRAPPER.register_module(Backend.RKNN.value)
class RKNNWrapper(BaseWrapper):
    """RKNN wrapper for inference.

    Args:
        model (str): Path of input RKNN model file.
        common_config (Dict): Config args for RKNN.
        output_names (Sequence[str]): Output names of the model.
        verbose (bool): Whether to enable verbose logging during inference.

    Examples:
        >>> from mmdeploy.backend.rknn import RKNNWrapper
        >>> import torch
        >>>
        >>> model = 'model.rknn'
        >>> model = RKNNWrapper(model)
        >>> inputs = dict(input=torch.randn(1, 3, 224, 224))
        >>> outputs = model(inputs)
        >>> print(outputs)
    """

    def __init__(self,
                 model: str,
                 common_config: Dict = dict(target_platform=None),
                 output_names: Optional[Sequence[str]] = None,
                 verbose=True,
                 **kwargs):
        logger = get_root_logger()
        # Create RKNN object
        self.rknn = RKNN(verbose=verbose)
        self.rknn.load_rknn(model)
        ret = self.rknn.init_runtime(target=common_config['target_platform'])
        if ret != 0:
            logger.error('Init runtime environment failed!')
            exit(ret)
        super().__init__(output_names)

    def forward(self, inputs: Dict[str,
                                   torch.Tensor]) -> Sequence[torch.Tensor]:
        """Run forward inference. Note that the shape of the input tensor is
        NxCxHxW while RKNN only accepts numpy inputs of NxHxWxC. There is a
        permute operation outside RKNN inference.

        Args:
            inputs (Dict[str, torch.Tensor]): Input name and tensor pairs.

        Return:
            Sequence[torch.Tensor]: The output tensors.
        """
        rknn_out = self.__rknnnn_execute(
            [i.permute(0, 2, 3, 1).cpu().numpy() for i in inputs.values()])
        return [torch.from_numpy(out) for out in rknn_out]

    @TimeCounter.count_time(Backend.RKNN.value)
    def __rknnnn_execute(self, inputs: Sequence[np.array]):
        """Run inference with RKNN."""
        return self.rknn.inference(inputs)


@ -135,6 +135,8 @@ def get_models(deploy_cfg: Union[str, mmcv.Config],
        net = replace_suffix(ir_name, '.om')
    elif backend == Backend.SNPE:
        net = replace_suffix(ir_name, '.dlc')
    elif backend == Backend.RKNN:
        net = replace_suffix(ir_name, '.rknn')
    elif backend in [Backend.ONNXRUNTIME, Backend.TORCHSCRIPT]:
        pass
    elif backend == Backend.COREML:


@ -6,7 +6,7 @@ import mmcv
import torch

from mmdeploy.utils import (SDK_TASK_MAP, Backend, get_backend_config,
                            get_common_config, get_ir_config, get_task_type)


class BaseBackendModel(torch.nn.Module, metaclass=ABCMeta):
@ -106,6 +106,13 @@ class BaseBackendModel(torch.nn.Module, metaclass=ABCMeta):
model=backend_files[0],
input_names=input_names,
output_names=output_names)
elif backend == Backend.RKNN:
from mmdeploy.backend.rknn import RKNNWrapper
common_config = get_common_config(deploy_cfg)
return RKNNWrapper(
model=backend_files[0],
common_config=common_config,
output_names=output_names)
elif backend == Backend.ASCEND:
from mmdeploy.backend.ascend import AscendWrapper
return AscendWrapper(model=backend_files[0], device=device)

View File

@ -144,6 +144,25 @@ class SDKEnd2EndModel(End2EndModel):
return pred[np.argsort(pred[:, 0])][np.newaxis, :, 1]
@__BACKEND_MODEL.register_module('rknn')
class RKNNEnd2EndModel(End2EndModel):
"""RKNN inference class, converts RKNN output to mmcls format."""
def forward_test(self, imgs: torch.Tensor, *args, **kwargs) -> \
List[np.ndarray]:
"""The interface for forward test.
Args:
imgs (torch.Tensor): Input image(s) in [N x C x H x W] format.
Returns:
List[np.ndarray]: A list of classification prediction.
"""
outputs = self.wrapper({self.input_name: imgs})
outputs = [out.numpy() for out in outputs]
return outputs
def get_classes_from_config(model_cfg: Union[str, mmcv.Config]):
"""Get class name from config.

View File

@ -8,6 +8,9 @@ from mmdeploy.utils.constants import Backend
@FUNCTION_REWRITER.register_rewriter(
func_name='mmdet.core.anchor.MlvlPointGenerator.single_level_grid_priors',
backend=Backend.TENSORRT.value)
@FUNCTION_REWRITER.register_rewriter(
func_name='mmdet.core.anchor.MlvlPointGenerator.single_level_grid_priors',
backend=Backend.RKNN.value)
def mlvl_point_generator__single_level_grid_priors__tensorrt(
ctx,
self,

View File

@ -657,6 +657,60 @@ class SDKEnd2EndModel(End2EndModel):
return [det_results]
@__BACKEND_MODEL.register_module('rknn')
class RKNNModel(End2EndModel):
"""RKNNModel.
RKNN inference class, converts RKNN output to mmdet format.
"""
def __init__(self, backend: Backend, backend_files: Sequence[str],
device: str, class_names: Sequence[str],
model_cfg: Union[str, mmcv.Config],
deploy_cfg: Union[str, mmcv.Config], **kwargs):
assert backend == Backend.RKNN, f'only supports RKNN, but got \
{backend.value}'
super(RKNNModel, self).__init__(backend, backend_files, device,
class_names, deploy_cfg, **kwargs)
# load cfg if necessary
model_cfg = load_config(model_cfg)[0]
self.model_cfg = model_cfg
def _get_bboxes(self, outputs):
from mmdet.models import build_head
head_cfg = self.model_cfg._cfg_dict.model.bbox_head
head = build_head(head_cfg)
if head_cfg.type == 'YOLOXHead':
ret = head.get_bboxes(
outputs[:3],
outputs[3:6],
outputs[6:9], [dict(scale_factor=None)],
cfg=self.model_cfg._cfg_dict.model.test_cfg)
elif head_cfg.type == 'YOLOV3Head':
ret = head.get_bboxes(
outputs, [dict(scale_factor=None)],
cfg=self.model_cfg._cfg_dict.model.test_cfg)
else:
raise NotImplementedError(f'{head_cfg.type} not supported yet.')
ret = [r.unsqueeze(0).cpu() for r in ret[0]]
return ret
def forward_test(self, imgs: torch.Tensor, *args, **kwargs):
"""Implement forward test.
Args:
imgs (torch.Tensor): Input image(s) in [N x C x H x W] format.
Returns:
list[np.ndarray, np.ndarray]: dets of shape [N, num_det, 5] and
class labels of shape [N, num_det].
"""
outputs = self.wrapper({self.input_name: imgs})
ret = self._get_bboxes(outputs)
return ret
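The YOLOX branch of _get_bboxes assumes the partitioned RKNN graph emits nine tensors in a fixed order; a small shape sketch of that assumption (the 640x640 input, strides 8/16/32 and 80 classes are illustrative, not enforced by this commit):
import torch

sizes = [80, 40, 20]  # feature-map sizes for strides 8, 16, 32
outputs = (
    [torch.rand(1, 80, s, s) for s in sizes]   # outputs[:3]  -> cls_scores
    + [torch.rand(1, 4, s, s) for s in sizes]  # outputs[3:6] -> bbox_preds
    + [torch.rand(1, 1, s, s) for s in sizes]  # outputs[6:9] -> objectnesses
)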
def get_classes_from_config(model_cfg: Union[str, mmcv.Config], **kwargs) -> \
List[str]:
"""Get class name from config. The class name is the `classes` field if it

View File

@ -2,7 +2,7 @@
import torch
from mmdeploy.codebase.mmdet import get_post_processing_params, multiclass_nms
from mmdeploy.core import FUNCTION_REWRITER, mark
@FUNCTION_REWRITER.register_rewriter(
@ -48,6 +48,13 @@ def yolox_head__get_bboxes(ctx,
tensor in the tuple is (N, num_box), and each element
represents the class label of the corresponding box.
"""
# mark pred_maps
@mark('yolo_head', inputs=['cls_scores', 'bbox_preds', 'objectnesses'])
def __mark_pred_maps(cls_scores, bbox_preds, objectnesses):
return cls_scores, bbox_preds, objectnesses
cls_scores, bbox_preds, objectnesses = __mark_pred_maps(
cls_scores, bbox_preds, objectnesses)
assert len(cls_scores) == len(bbox_preds) == len(objectnesses)
device = cls_scores[0].device
cfg = self.test_cfg if cfg is None else cfg
@ -74,10 +81,10 @@ def yolox_head__get_bboxes(ctx,
score_factor = torch.cat(flatten_objectness, dim=1).sigmoid()
flatten_bbox_preds = torch.cat(flatten_bbox_preds, dim=1)
flatten_priors = torch.cat(mlvl_priors)
bboxes = self._bbox_decode(flatten_priors, flatten_bbox_preds)
# directly multiply score factor and feed to nms
scores = cls_scores * (score_factor.unsqueeze(-1))
if not with_nms:
return bboxes, scores

View File

@ -147,6 +147,26 @@ class End2EndModel(BaseBackendModel):
out_file=out_file)
@__BACKEND_MODEL.register_module('rknn')
class RKNNModel(End2EndModel):
"""SDK inference class, converts RKNN output to mmseg format."""
def forward_test(self, imgs: torch.Tensor, *args, **kwargs) -> \
List[np.ndarray]:
"""The interface for forward test.
Args:
imgs (torch.Tensor): Input image(s) in [N x C x H x W] format.
Returns:
List[np.ndarray]: A list of segmentation map.
"""
outputs = self.wrapper({self.input_name: imgs})
outputs = [output.argmax(dim=1, keepdim=True) for output in outputs]
outputs = [out.detach().cpu().numpy() for out in outputs]
return outputs
@__BACKEND_MODEL.register_module('sdk')
class SDKEnd2EndModel(End2EndModel):
"""SDK inference class, converts SDK output to mmseg format."""

View File

@ -1,5 +1,9 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .base import base_segmentor__forward
from .encoder_decoder import (encoder_decoder__simple_test,
encoder_decoder__simple_test__rknn)
__all__ = [
'base_segmentor__forward', 'encoder_decoder__simple_test',
'encoder_decoder__simple_test__rknn'
]

View File

@ -2,6 +2,7 @@
import torch.nn.functional as F
from mmdeploy.core import FUNCTION_REWRITER
from mmdeploy.utils.constants import Backend
@FUNCTION_REWRITER.register_rewriter(
@ -26,3 +27,26 @@ def encoder_decoder__simple_test(ctx, self, img, img_meta, **kwargs):
seg_logit = F.softmax(seg_logit, dim=1)
seg_pred = seg_logit.argmax(dim=1, keepdim=True)
return seg_pred
@FUNCTION_REWRITER.register_rewriter(
func_name='mmseg.models.segmentors.EncoderDecoder.simple_test',
backend=Backend.RKNN.value)
def encoder_decoder__simple_test__rknn(ctx, self, img, img_meta, **kwargs):
"""Rewrite `simple_test` for RKNN backend.
Early return to avoid argmax operator.
Args:
ctx (ContextCaller): The context with additional information.
self: The instance of the original class.
img (Tensor | List[Tensor]): Input image tensor(s).
img_meta (dict): Dict containing image's meta information
such as `img_shape`.
Returns:
torch.Tensor: Output segmentation logits of shape [N, C, H, W].
"""
seg_logit = self.encode_decode(img, img_meta)
seg_logit = F.softmax(seg_logit, dim=1)
return seg_logit
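Because this rewrite returns the softmax logits directly, the argmax happens on the host; a minimal pairing sketch (the 19-class 512x512 logits are a stand-in for the backend output), mirroring RKNNModel.forward_test earlier in this commit:
import torch

seg_logit = torch.rand(1, 19, 512, 512)           # stand-in for the RKNN output
seg_pred = seg_logit.argmax(dim=1, keepdim=True)  # [N, 1, H, W] segmentation map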

View File

@ -22,8 +22,8 @@ if importlib.util.find_spec('mmcv') is not None:
get_dynamic_axes, get_input_shape,
get_ir_config, get_model_inputs,
get_onnx_config, get_partition_config,
get_quantization_config, get_task_type,
is_dynamic_batch, is_dynamic_shape, load_config)
# yapf: enable
@ -32,6 +32,6 @@ if importlib.util.find_spec('mmcv') is not None:
'get_calib_config', 'get_calib_filename', 'get_codebase',
'get_codebase_config', 'get_common_config', 'get_dynamic_axes',
'get_input_shape', 'get_ir_config', 'get_model_inputs',
'get_onnx_config', 'get_partition_config', 'get_quantization_config',
'get_task_type', 'is_dynamic_batch', 'is_dynamic_shape', 'load_config'
]

View File

@ -329,6 +329,20 @@ def get_common_config(deploy_cfg: Union[str, mmcv.Config]) -> Dict:
return model_params
def get_quantization_config(deploy_cfg: Union[str, mmcv.Config]) -> Dict:
"""Get quantization parameters from config.
Args:
deploy_cfg (str | mmcv.Config): The path or content of config.
Returns:
dict: A dict of quantization parameters for a model.
"""
backend_config = deploy_cfg['backend_config']
model_params = backend_config.get('quantization_config', dict())
return model_params
def get_model_inputs(deploy_cfg: Union[str, mmcv.Config]) -> List[Dict]:
"""Get model input parameters from config.

View File

@ -59,6 +59,7 @@ class Backend(AdvancedEnum):
OPENVINO = 'openvino'
SDK = 'sdk'
TORCHSCRIPT = 'torchscript'
RKNN = 'rknn'
ASCEND = 'ascend'
COREML = 'coreml'
DEFAULT = 'default'

View File

@ -46,6 +46,9 @@ def backend_checker(backend: Backend, require_plugin: bool = False):
from mmdeploy.apis.ncnn import is_custom_ops_available
elif backend == Backend.OPENVINO:
from mmdeploy.apis.openvino import is_available
elif backend == Backend.RKNN:
# device not required as the backend does not actually run here
from mmdeploy.apis.rknn import is_available
elif backend == Backend.ASCEND:
from mmdeploy.apis.ascend import is_available
else:
@ -98,6 +101,13 @@ def check_backend(backend: Backend, require_plugin: bool = False):
from mmdeploy.apis.openvino import is_available
elif backend == Backend.TORCHSCRIPT:
from mmdeploy.backend.torchscript import ops_available as is_available
elif backend == Backend.RKNN:
from mmdeploy.backend.rknn import is_available
if not is_available():
# skip the test on GitHub CI
pytest.skip(f'{backend.value} package is not available')
# device required
from mmdeploy.backend.rknn import device_available as is_available
elif backend == Backend.ASCEND:
from mmdeploy.backend.ascend import is_available
else:

View File

@ -0,0 +1,67 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp
import tempfile
import mmcv
import pytest
import torch
import torch.nn as nn
from mmdeploy.utils import Backend
from mmdeploy.utils.test import backend_checker
onnx_file = tempfile.NamedTemporaryFile(suffix='.onnx').name
test_img = torch.rand([1, 3, 8, 8])
@pytest.mark.skip(reason='This is not a test class but a utility class.')
class TestModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x * 0.5
test_model = TestModel().eval()
def generate_onnx_file(model):
with torch.no_grad():
torch.onnx.export(
model,
test_img,
onnx_file,
output_names=['output'],
input_names=['input'],
keep_initializers_as_inputs=True,
do_constant_folding=True,
verbose=False,
opset_version=11)
assert osp.exists(onnx_file)
def get_deploy_cfg():
deploy_cfg = mmcv.Config(
dict(
backend_config=dict(
type='rknn',
common_config=dict(),
quantization_config=dict(do_quantization=False, dataset=None),
input_size_list=[[3, 8, 8]])))
return deploy_cfg
@backend_checker(Backend.RKNN)
def test_onnx2rknn():
from mmdeploy.backend.rknn.onnx2rknn import onnx2rknn
model = test_model
generate_onnx_file(model)
work_dir, _ = osp.split(onnx_file)
rknn_file = onnx_file.replace('.onnx', '.rknn')
deploy_cfg = get_deploy_cfg()
onnx2rknn(onnx_file, rknn_file, deploy_cfg)
assert osp.exists(work_dir)
assert osp.exists(rknn_file)

View File

@ -15,6 +15,7 @@ ts_file = tempfile.NamedTemporaryFile(suffix='.pt').name
test_img = torch.rand(1, 3, 8, 8)
output_names = ['output']
input_names = ['input']
target_platform = 'rk3588'  # the pre-compiled RKNN model needs a target device
@pytest.mark.skip(reason='This is not a test class but a utility class.')
@ -103,6 +104,21 @@ def onnx2backend(backend, onnx_file):
work_dir = backend_dir
from_onnx(onnx_file, work_dir, input_info, output_names)
return backend_file
elif backend == Backend.RKNN:
import mmcv
from mmdeploy.apis.rknn import onnx2rknn
rknn_file = onnx_file.replace('.onnx', '.rknn')
deploy_cfg = mmcv.Config(
dict(
backend_config=dict(
type='rknn',
common_config=dict(target_platform=target_platform),
quantization_config=dict(
do_quantization=False, dataset=None),
input_size_list=[[3, 8, 8]])))
onnx2rknn(onnx_file, rknn_file, deploy_cfg)
return rknn_file
elif backend == Backend.ASCEND:
import mmcv
@ -145,6 +161,13 @@ def create_wrapper(backend, model_files):
torchscript_model = TorchscriptWrapper(
model_files, input_names=input_names, output_names=output_names)
return torchscript_model
elif backend == Backend.RKNN:
from mmdeploy.backend.rknn import RKNNWrapper
rknn_model = RKNNWrapper(
model_files,
common_config=dict(target_platform=target_platform),
output_names=output_names)
return rknn_model
elif backend == Backend.ASCEND:
from mmdeploy.backend.ascend import AscendWrapper
ascend_model = AscendWrapper(model_files)
@ -179,6 +202,8 @@ def run_wrapper(backend, wrapper, input):
elif backend == Backend.TORCHSCRIPT:
results = wrapper({'input': input})['output']
return results
elif backend == Backend.RKNN:
results = wrapper({'input': input})
elif backend == Backend.ASCEND:
results = wrapper({'input': input})['output']
return results
@ -188,7 +213,7 @@ def run_wrapper(backend, wrapper, input):
ALL_BACKEND = [
Backend.TENSORRT, Backend.ONNXRUNTIME, Backend.PPLNN, Backend.NCNN,
Backend.OPENVINO, Backend.TORCHSCRIPT, Backend.ASCEND, Backend.RKNN
]

View File

@ -77,6 +77,44 @@ class TestEnd2EndModel:
assert osp.exists(img_path), 'Fails to create drawn image.'
@backend_checker(Backend.RKNN)
class TestRKNNEnd2EndModel:
@classmethod
def setup_class(cls):
# force add backend wrapper regardless of plugins
import mmdeploy.backend.rknn as rknn_apis
from mmdeploy.backend.rknn import RKNNWrapper
rknn_apis.__dict__.update({'RKNNWrapper': RKNNWrapper})
# simplify backend inference
cls.wrapper = SwitchBackendWrapper(RKNNWrapper)
cls.outputs = [torch.rand(1, 1, IMAGE_SIZE, IMAGE_SIZE)]
cls.wrapper.set(outputs=cls.outputs)
deploy_cfg = mmcv.Config({
'onnx_config': {
'output_names': ['outputs']
},
'backend_config': {
'common_config': {}
}
})
from mmdeploy.codebase.mmcls.deploy.classification_model import \
RKNNEnd2EndModel
class_names = ['' for i in range(NUM_CLASS)]
cls.end2end_model = RKNNEnd2EndModel(
Backend.RKNN, [''],
device='cpu',
class_names=class_names,
deploy_cfg=deploy_cfg)
def test_forward_test(self):
imgs = torch.rand(2, 3, IMAGE_SIZE, IMAGE_SIZE)
results = self.end2end_model.forward_test(imgs)
assert isinstance(results[0], np.ndarray)
@pytest.mark.parametrize('from_file', [True, False])
@pytest.mark.parametrize('data_type', ['train', 'val', 'test'])
def test_get_classes_from_config(from_file, data_type):

View File

@ -509,3 +509,86 @@ class TestNCNNEnd2EndModel:
imgs = torch.rand(1, 3, 64, 64)
results = self.ncnn_end2end_model.forward_test(imgs)
assert_det_results(results, 'NCNNEnd2EndModel')
@backend_checker(Backend.RKNN)
class TestRKNNModel:
@classmethod
def setup_class(cls):
# force add backend wrapper regardless of plugins
import mmdeploy.backend.rknn as rknn_apis
from mmdeploy.backend.rknn import RKNNWrapper
rknn_apis.__dict__.update({'RKNNWrapper': RKNNWrapper})
# simplify backend inference
cls.wrapper = SwitchBackendWrapper(RKNNWrapper)
cls.outputs = [
torch.rand(1, 255, 5, 5),
torch.rand(1, 255, 10, 10),
torch.rand(1, 255, 20, 20)
]
cls.wrapper.set(outputs=cls.outputs)
deploy_cfg = mmcv.Config({
'onnx_config': {
'output_names': ['output']
},
'backend_config': {
'common_config': {}
}
})
model_cfg = mmcv.Config(
dict(
model=dict(
bbox_head=dict(
type='YOLOV3Head',
num_classes=80,
in_channels=[512, 256, 128],
out_channels=[1024, 512, 256],
anchor_generator=dict(
type='YOLOAnchorGenerator',
base_sizes=[[(116, 90), (156, 198), (
373, 326)], [(30, 61), (62, 45), (
59, 119)], [(10, 13), (16, 30), (33, 23)]],
strides=[32, 16, 8]),
bbox_coder=dict(type='YOLOBBoxCoder'),
featmap_strides=[32, 16, 8],
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=True,
loss_weight=1.0,
reduction='sum'),
loss_conf=dict(
type='CrossEntropyLoss',
use_sigmoid=True,
loss_weight=1.0,
reduction='sum'),
loss_xy=dict(
type='CrossEntropyLoss',
use_sigmoid=True,
loss_weight=2.0,
reduction='sum'),
loss_wh=dict(
type='MSELoss', loss_weight=2.0, reduction='sum')),
test_cfg=dict(
nms_pre=1000,
min_bbox_size=0,
score_thr=0.05,
conf_thr=0.005,
nms=dict(type='nms', iou_threshold=0.45),
max_per_img=100))))
from mmdeploy.codebase.mmdet.deploy.object_detection_model import \
RKNNModel
cls.rknn_model = RKNNModel(Backend.RKNN, ['', ''], 'cpu',
['' for i in range(80)], model_cfg,
deploy_cfg)
@classmethod
def teardown_class(cls):
cls.wrapper.recover()
def test_forward_test(self):
imgs = torch.rand(1, 3, 64, 64)
results = self.rknn_model.forward_test(imgs)
assert_det_results(results, 'RKNNWrapper')

View File

@ -85,6 +85,45 @@ class TestEnd2EndModel:
assert osp.exists(img_path), 'Fails to create drawn image.'
@backend_checker(Backend.RKNN)
class TestRKNNModel:
@classmethod
def setup_class(cls):
# force add backend wrapper regardless of plugins
import mmdeploy.backend.rknn as rknn_apis
from mmdeploy.backend.rknn import RKNNWrapper
rknn_apis.__dict__.update({'RKNNWrapper': RKNNWrapper})
# simplify backend inference
cls.wrapper = SwitchBackendWrapper(RKNNWrapper)
cls.outputs = [torch.rand(1, 19, IMAGE_SIZE, IMAGE_SIZE)]
cls.wrapper.set(outputs=cls.outputs)
deploy_cfg = mmcv.Config({
'onnx_config': {
'output_names': ['outputs']
},
'backend_config': {
'common_config': {}
}
})
from mmdeploy.codebase.mmseg.deploy.segmentation_model import RKNNModel
class_names = ['' for i in range(NUM_CLASS)]
palette = np.random.randint(0, 255, size=(NUM_CLASS, 3))
cls.rknn_model = RKNNModel(
Backend.RKNN, [''],
device='cpu',
class_names=class_names,
palette=palette,
deploy_cfg=deploy_cfg)
def test_forward_test(self):
imgs = torch.rand(2, 3, IMAGE_SIZE, IMAGE_SIZE)
results = self.rknn_model.forward_test(imgs)
assert isinstance(results[0], np.ndarray)
@pytest.mark.parametrize('from_file', [True, False])
@pytest.mark.parametrize('data_type', ['train', 'val', 'test'])
def test_get_classes_palette_from_config(from_file, data_type):

View File

@ -98,7 +98,7 @@ def torch2ir(ir_type: IR):
def main():
args = parse_args()
set_start_method('spawn', force=True)
logger = get_root_logger()
log_level = logging.getLevelName(args.log_level)
logger.setLevel(log_level)
@ -351,6 +351,30 @@ def main():
pplnn_files += [onnx_path, algo_file]
backend_files = pplnn_files
elif backend == Backend.RKNN:
from mmdeploy.apis.rknn import is_available as rknn_is_available
assert rknn_is_available(
), 'RKNN is not available, please install RKNN first.'
from mmdeploy.apis.rknn import onnx2rknn
PIPELINE_MANAGER.enable_multiprocess(True, [onnx2rknn])
PIPELINE_MANAGER.set_log_level(logging.INFO, [onnx2rknn])
backend_files = []
for model_id, onnx_path in zip(range(len(ir_files)), ir_files):
pre_fix_name = osp.splitext(osp.split(onnx_path)[1])[0]
output_path = osp.join(args.work_dir, pre_fix_name + '.rknn')
import tempfile
dataset_file = tempfile.NamedTemporaryFile(suffix='.txt').name
with open(dataset_file, 'w') as f:
f.writelines([osp.abspath(args.img)])
onnx2rknn(
onnx_path,
output_path,
deploy_cfg_path,
dataset_file=dataset_file)
backend_files.append(output_path)
elif backend == Backend.ASCEND:
from mmdeploy.apis.ascend import from_onnx