mmdeploy/tests/test_backend/test_wrapper.py

# Copyright (c) OpenMMLab. All rights reserved.
import subprocess
import tempfile

import pytest
import torch
import torch.nn as nn

from mmdeploy.utils.constants import Backend
from mmdeploy.utils.test import check_backend

onnx_file = tempfile.NamedTemporaryFile(suffix='.onnx').name
ts_file = tempfile.NamedTemporaryFile(suffix='.pt').name
test_img = torch.rand(1, 3, 8, 8)
output_names = ['output']
input_names = ['input']


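# A tiny constant-addition network is all the backends need: for any input
# of shape (1, 3, 8, 8) the expected output is simply `x + test_img`.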
@pytest.mark.skip(reason='This is not a test class but a utility class.')
class TestModel(nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x + test_img


model = TestModel().eval()


@pytest.fixture(autouse=True, scope='module')
def generate_onnx_file():
    with torch.no_grad():
        torch.onnx.export(
            model,
            test_img,
            onnx_file,
            output_names=output_names,
            input_names=input_names,
            keep_initializers_as_inputs=True,
            do_constant_folding=True,
            verbose=False,
            opset_version=11,
            dynamic_axes=None)


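# The TorchScript backend does not consume ONNX; the model is converted
# separately into `ts_file` via `torch2torchscript_impl`.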
@pytest.fixture(autouse=True, scope='module')
def generate_torchscript_file():
    import mmcv
    from mmdeploy.apis import torch2torchscript_impl
    deploy_cfg = mmcv.Config(
        {'backend_config': dict(type=Backend.TORCHSCRIPT.value)})
    with torch.no_grad():
        torch2torchscript_impl(model, torch.rand(1, 3, 8, 8), deploy_cfg,
                               ts_file)


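# Convert the shared ONNX model into the artifact(s) each backend loads:
# a serialized engine for TensorRT, .param/.bin files for ncnn, an IR file
# for OpenVINO, and so on.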
def onnx2backend(backend, onnx_file):
    if backend == Backend.TENSORRT:
        from mmdeploy.backend.tensorrt import (create_trt_engine,
                                               save_trt_engine)
        backend_file = tempfile.NamedTemporaryFile(suffix='.engine').name
        engine = create_trt_engine(
            onnx_file, {
                'input': {
                    'min_shape': [1, 3, 8, 8],
                    'opt_shape': [1, 3, 8, 8],
                    'max_shape': [1, 3, 8, 8]
                }
            })
        save_trt_engine(engine, backend_file)
        return backend_file
    elif backend == Backend.ONNXRUNTIME:
        return onnx_file
    elif backend == Backend.PPLNN:
        from mmdeploy.apis.pplnn import onnx2pplnn
        algo_file = tempfile.NamedTemporaryFile(suffix='.json').name
        onnx2pplnn(algo_file=algo_file, onnx_model=onnx_file)
        return onnx_file, algo_file
    elif backend == Backend.NCNN:
        from mmdeploy.backend.ncnn.init_plugins import get_onnx2ncnn_path
        onnx2ncnn_path = get_onnx2ncnn_path()
        param_file = tempfile.NamedTemporaryFile(suffix='.param').name
        bin_file = tempfile.NamedTemporaryFile(suffix='.bin').name
        subprocess.call([onnx2ncnn_path, onnx_file, param_file, bin_file])
        return param_file, bin_file
    elif backend == Backend.OPENVINO:
        from mmdeploy.apis.openvino import get_output_model_file, onnx2openvino
        backend_dir = tempfile.TemporaryDirectory().name
        backend_file = get_output_model_file(onnx_file, backend_dir)
        input_info = {'input': test_img.shape}
        output_names = ['output']
        work_dir = backend_dir
        onnx2openvino(input_info, output_names, onnx_file, work_dir)
        return backend_file


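# Build the backend-specific wrapper around the converted model files; every
# wrapper exposes the same dict-in/dict-out call interface.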
def create_wrapper(backend, model_files):
    if backend == Backend.TENSORRT:
        from mmdeploy.backend.tensorrt import TRTWrapper
        trt_model = TRTWrapper(model_files, output_names)
        return trt_model
    elif backend == Backend.ONNXRUNTIME:
        from mmdeploy.backend.onnxruntime import ORTWrapper
        ort_model = ORTWrapper(model_files, 'cpu', output_names)
        return ort_model
    elif backend == Backend.PPLNN:
        from mmdeploy.backend.pplnn import PPLNNWrapper
        onnx_file, algo_file = model_files
        pplnn_model = PPLNNWrapper(onnx_file, algo_file, 'cpu', output_names)
        return pplnn_model
    elif backend == Backend.NCNN:
        from mmdeploy.backend.ncnn import NCNNWrapper
        param_file, bin_file = model_files
        ncnn_model = NCNNWrapper(param_file, bin_file, output_names)
        return ncnn_model
    elif backend == Backend.OPENVINO:
        from mmdeploy.backend.openvino import OpenVINOWrapper
        openvino_model = OpenVINOWrapper(model_files, output_names)
        return openvino_model
    elif backend == Backend.TORCHSCRIPT:
        from mmdeploy.backend.torchscript import TorchscriptWrapper
        torchscript_model = TorchscriptWrapper(
            model_files, input_names=input_names, output_names=output_names)
        return torchscript_model
    else:
        raise NotImplementedError(f'Unknown backend type: {backend.value}')


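# Run one forward pass through the wrapper, normalizing per-backend input
# requirements (CUDA tensors for TensorRT, float tensors for ncnn).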
def run_wrapper(backend, wrapper, input):
    if backend == Backend.TENSORRT:
        input = input.cuda()
        results = wrapper({'input': input})['output']
        results = results.detach().cpu()
        return results
    elif backend == Backend.ONNXRUNTIME:
        results = wrapper({'input': input})['output']
        results = results.detach().cpu()
        return results
    elif backend == Backend.PPLNN:
        results = wrapper({'input': input})['output']
        results = results.detach().cpu()
        return results
    elif backend == Backend.NCNN:
        input = input.float()
        results = wrapper({'input': input})['output']
        results = results.detach().cpu()
        return results
    elif backend == Backend.OPENVINO:
        results = wrapper({'input': input})['output']
        results = results.detach().cpu()
        return results
    elif backend == Backend.TORCHSCRIPT:
        results = wrapper({'input': input})['output']
        return results
    else:
        raise NotImplementedError(f'Unknown backend type: {backend.value}')


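# Backends covered by the parametrized test below; `check_backend` skips the
# test for any backend that is not available in the current environment.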
ALL_BACKEND = [
    Backend.TENSORRT, Backend.ONNXRUNTIME, Backend.PPLNN, Backend.NCNN,
    Backend.OPENVINO, Backend.TORCHSCRIPT
]


@pytest.mark.parametrize('backend', ALL_BACKEND)
def test_wrapper(backend):
    check_backend(backend, True)
    if backend == Backend.TORCHSCRIPT:
        model_files = ts_file
    else:
        model_files = onnx2backend(backend, onnx_file)
    assert model_files is not None
    wrapper = create_wrapper(backend, model_files)
    assert wrapper is not None
    results = run_wrapper(backend, wrapper, test_img)
    assert results is not None
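
# A minimal sketch of running a single backend locally (assuming pytest and
# the ONNX Runtime backend are installed; the exact -k expression depends on
# how pytest renders the parametrized test ids):
#
#   pytest tests/test_backend/test_wrapper.py -k ONNXRUNTIME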