Dev v0.4.0 (#301)
* bump version to v0.4.0
* [Enhancement] Make rewriter more powerful (#150)
* Finish function tests
* lint
* resolve comments
* Fix tests
* docstring & fix
* Complement information
* lint
* Add example
* Fix version
* Remove todo
Co-authored-by: RunningLeon <mnsheng@yeah.net>
* Torchscript support (#159)
* support torchscript
* add nms
* add torchscript configs and update deploy process and dump-info
* typescript -> torchscript
* add torchscript custom extension support
* add ts custom ops again
* support mmseg unet
* [WIP] add optimizer for torchscript (#119)
* add passes
* add python api
* Torchscript optimizer python api (#121)
* add passes
* add python api
* use python api instead of executable
* Merge Master, update optimizer (#151)
* [Feature] add yolox ncnn (#29)
* add yolox ncnn
* add ncnn android performance of yolox
* add ut
* fix lint
* fix None bugs for ncnn
* test codecov
* test codecov
* add device
* fix yapf
* remove if-else for img shape
* use channelshuffle optimize
* change benchmark after channelshuffle
* fix yapf
* fix yapf
* fuse continuous reshape
* fix static shape deploy
* fix code
* drop pad
* only static shape
* fix static
* fix docstring
* Added mask overlay to output image, changed fprintf info messages to … (#55)
* Added mask overlay to output image, changed fprintf info messages to stdout
* Improved box filtering (filter area/score), make sure roi coordinates stay within bounds
* clang-format
* Support UNet in mmseg (#77)
* Repeatdataset in train has no CLASSES & PALETTE
* update result for unet
* update docstring for mmdet
* remove ppl for unet in docs
* fix ort wrap about input type (#81)
* Fix memleak (#86)
* delete []
* fix build error when enabling MMDEPLOY_ACTIVE_LEVEL
* fix lint
* [Doc] Nano benchmark and tutorial (#71)
* add cls benchmark
* add nano zh-cn benchmark and en tutorial
* add device row
* add doc path to index.rst
* fix typo
* [Fix] fix missing deploy_core (#80)
* fix missing deploy_core
* mv flag to demo
* target link
* [Docs] Fix links in Chinese doc (#84)
* Fix docs in Chinese link
* Fix links
* Delete symbolic link and add links to html
* delete files
* Fix link
* [Feature] Add docker files (#67)
* add gpu and cpu dockerfile
* fix lint
* fix cpu docker and remove redundant
* use pip instead
* add build arg and readme
* fix grammar
* update readme
* add chinese doc for dockerfile and add docker build to build.md
* grammar
* refine dockerfiles
* add FAQs
* update Dpplcv_DIR for SDK building
* remove mmcls
* add sdk demos
* fix typo and lint
* update FAQs
* [Fix] fix check_env (#101)
* fix check_env
* update
* Replace convert_syncbatchnorm in mmseg (#93)
* replace convert_syncbatchnorm with revert_sync_batchnorm from mmcv
* change logger
* [Doc] Update FAQ for TensorRT (#96)
* update FAQ
* comment
* [Docs]: Update doc for openvino installation (#102)
* fix docs
* fix docs
* fix docs
* fix mmcv version
* fix docs
* rm blank line
* simplify non batch nms (#99)
* [Enhancement] Allow test.py to save evaluation results (#108)
* Add log file
* Delete debug code
* Rename logger
* resolve comments
* [Enhancement] Support mmocr v0.4+ (#115)
* support mmocr v0.4+
* 0.4.0 -> 0.4.1
* fix onnxruntime wrapper for gpu inference (#123)
* fix ncnn wrapper for ort-gpu
* resolve comment
* fix lint
* Fix typo (#132)
* lock mmcls version (#131)
* [Enhancement] upgrade isort in pre-commit config (#141)
* [Enhancement] upgrade isort in pre-commit config by referring to mmflow PR #87
* fix lint
* remove .isort.cfg and put its known_third_party to setup.cfg
* Fix ci for mmocr (#144)
* fix mmocr unittests
* remove useless
* lock mmdet maximum version to 2.20
* pip install -U numpy
* Fix capture_output (#125)
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
* configs for all tasks
* use torchvision roi align
* remove unnecessary code
* fix ut
* fix ut
* export
* det dynamic
* det dynamic
* add ut
* fix ut
* add ut and docs
* fix ut
* skip torchscript ut if no ops available
* add torchscript option to build.md
* update benchmark and resolve comments
* resolve conflicts
* rename configs
* fix mrcnn cuda test
* remove useless
* add version requirements to docs and comments to codes
* enable empty image exporting for torchscript and accelerate ORT inference for MRCNN
* rebase
* update example for torchscript.md
* update FAQs for torchscript.md
* resolve comments
* only use torchvision roi_align for torchscript
* fix ut
* use torchvision roi align when pool model is avg
* resolve comments
Co-authored-by: grimoire <streetyao@live.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
* Update supported mmseg models (#181)
* fix ocrnet cascade decoder
* update mmseg support models
* update mmseg configs
* support emanet and icnet
* set max K of TopK for tensorrt
* update supported models for mmseg in docs
* add test for emamodule
* add configs and update docs
* Update docs
* update benchmark
* [Features] Support mmdet3d (#103)
* add mmdet3d code
* add code
* update code
* [log] This commit finishes pointpillar export and evaluation on onnxruntime; the model is the sample model from the nvidia repo
* add tensorrt config
* fix config
* update
* support for tensorrt
* add config
* fix config
* fix apis about torch2onnx
* update
* mmdet3d deploy version1.0
* map is ok
* fix code
* version1.0
* fix code
* fix visual
* fix bug
* tensorrt support success
* add docstring
* add docs
* fix docs
* fix comments
* fix comment
* fix comment
* fix openvino wrapper
* add unit test
* fix device about cpu
* fix comment
* fix show_result
* fix lint
* fix requirements
* remove ci about det3d
* fix ut
* add ut data
* support for new version pointpillars
* fix comment
* fix support_list
* fix comments
* fix config name
* [Enhancement] Update pad logic in detection heads (#168)
* pad with register
* fix lint
Co-authored-by: AllentDan <dongchunyu@sensetime.com>
* [Enhancement] Additional arguments support for OpenVINO Model Optimizer (#178)
* Add mo args.
* [Docs]: update docs and argument descriptions (#196)
* bump version to v0.4.0
* update docs and argument descriptions
* revert version change
* fix unnecessary change of config for dynamic exportation (#199)
* fix mmcls get classes (#215)
* fix mmcls get classes
* resolve comment
* resolve comment
* Add ModelOptimizerOptions.
* Fix merge bugs.
* Update mmpose.md (#224)
* [Docstring] add example in apis docstring (#214)
* add example in apis docstring
* add backend example in docstring
* rm blank line
* Fixed get_mo_options_from_cfg args
* fix l2norm test
Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: Haofan Wang <frankmiracle@outlook.com>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
* [Enhancement] Switch to statically typed Value::Any (#209)
* replace std::any with StaticAny
* fix __compare_typeid
* remove fallback id support
* constraint on traits::TypeId<T>::value
* fix includes
* [Enhancement] TensorRT DCN support (#205)
* add tensorrt dcn support
* fix lint
* remove roi_align plugin for ORT (#258)
* remove roi_align plugin
* remove ut
* skip single_roi_extractor UT for ORT in CI
* move align to symbolic and update docs
* recover UT
* resolve comments
* [Enhancement]: Support fcn_unet deployment with dynamic shape (#251)
* support mmseg fcn+unet dynamic shape
* add test
* fix ci
* fix units
* resolve comments
* [Enhancement] fix-cmake-relocatable (#223)
* require user to specify xxx_dir
* fix line ending
* fix end-of-file-fixer
* try to fix ld cudart cublas
* add ENV var search
* fix CMAKE_CUDA_COMPILER
* cpu, cuda should all work well
* remove commented code
* fix ncnn example find ncnn package (#282)
* table format is wrong (#283)
* update pre-commit (#284)
* update pre-commit
* fix clang-format
* fix mmseg config (#281)
* fix mmseg config
* fix mmpose evaluate outputs
* fix lint
* update pre-commit config
* fix lint
* Revert "update pre-commit config"
This reverts commit c3fd71611f0b79dfa9ad73fc0f4555c1b3563665.
* miss code symbol (#296)
* refactor cmake build (#295)
* add-mmpose-sdk (#259)
* add-mmpose-codebase
* fix ci
* fix img_shape after TopDownAffine
* rename TopDown module -> XheadDecode & implement regression decode
* align keypoints_from_heatmap
* remove hardcoded keypoint_head; needs refactoring, currently only supports topdown config
* add mmpose python api
* update mmpose-python code
* can't clip fake box
* fix rebase error
* fix rebase error
* link mspn decoder to base decoder
* fix ci
* compile with gcc7.5
* remove unused code
* fix
* fix prompt
* remove unnecessary cv::parallel_for_
* rewrite TopdownHeatmapMultiStageHead.inference_model
* add comment
* add more detailed docstring on why _cs2xyxy is used in the sdk backend
* fix Registry name
* remove unused param & add comment on output result
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: grimoire <streetyao@live.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
Co-authored-by: Haofan Wang <frankmiracle@outlook.com>
* update faq about WinError 1455 (#297)
* update faq about WinError 1455
* Update faq.md
* Update faq.md
* fix ci
Co-authored-by: chenxin2 <chenxin2@sensetime.com>
* [Feature] Support centerpoint (#252)
* support for centerpoint
* [Enhancement] TensorRT DCN support (#205)
* add tensorrt dcn support
* fix lint
* add docstring and dcn model support
* add centerpoint ut and docs
* add config and fix input rank
* fix merge error
* fix a bug
* fix comment
* [Doc] update benchmark add supported-model-list (#286)
* update benchmark add supported-model-list
* fix lint
* fix lint
* lock mmocr maximum version
* fix ut
Co-authored-by: maningsheng <mnsheng@yeah.net>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: grimoire <streetyao@live.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
Co-authored-by: AllentDan <dongchunyu@sensetime.com>
Co-authored-by: Haofan Wang <frankmiracle@outlook.com>
Co-authored-by: lzhangzz <lzhang329@gmail.com>
Co-authored-by: maningsheng <mnsheng@yeah.net>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: grimoire <streetyao@live.com>
Co-authored-by: grimoire <yaoqian@sensetime.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: Johannes L <tehkillerbee@users.noreply.github.com>
Co-authored-by: VVsssssk <88368822+VVsssssk@users.noreply.github.com>
Co-authored-by: 杨培文 (Yang Peiwen) <915505626@qq.com>
Co-authored-by: Semyon Bevzyuk <semen.bevzuk@gmail.com>
Co-authored-by: AllentDan <dongchunyu@sensetime.com>
Co-authored-by: Haofan Wang <frankmiracle@outlook.com>
Co-authored-by: lzhangzz <lzhang329@gmail.com>
Co-authored-by: Chen Xin <xinchen.tju@gmail.com>
Co-authored-by: chenxin2 <chenxin2@sensetime.com>
2022-04-01 18:14:23 +08:00
// Copyright (c) OpenMMLab. All rights reserved.

#include <set>

#include "archive/json_archive.h"
#include "archive/value_archive.h"
#include "core/registry.h"
#include "core/tensor.h"
#include "core/utils/device_utils.h"
#include "core/utils/formatter.h"
#include "opencv2/imgproc.hpp"
#include "opencv_utils.h"
#include "preprocess/transform/resize.h"
#include "preprocess/transform/transform.h"

using namespace std;

namespace mmdeploy {

cv::Point2f operator*(cv::Point2f a, cv::Point2f b) {
  cv::Point2f c;
  c.x = a.x * b.x;
  c.y = a.y * b.y;
  return c;
}

class TopDownAffineImpl : public Module {
 public:
  explicit TopDownAffineImpl(const Value& args) noexcept {
    use_udp_ = args.value("use_udp", use_udp_);
    backend_ = args.contains("backend") && args["backend"].is_string()
                   ? args["backend"].get<string>()
                   : backend_;
    stream_ = args["context"]["stream"].get<Stream>();
    assert(args.contains("image_size"));
    from_value(args["image_size"], image_size_);
  }

  ~TopDownAffineImpl() override = default;
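  // Reads "img", "box" ([x, y, w, h]) and "rotation" (degrees) from the input value,
  // warps the box region onto an image_size_ canvas (UDP warp matrix or the classic
  // three-point affine transform), and writes back "img", "img_shape", "center" and
  // "scale" for downstream keypoint decoding.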
  Result<Value> Process(const Value& input) override {
    MMDEPLOY_DEBUG("top_down_affine input: {}", input);

    Device host{"cpu"};
    auto _img = input["img"].get<Tensor>();
    OUTCOME_TRY(auto img, MakeAvailableOnDevice(_img, host, stream_));
    stream_.Wait().value();
    auto src = cpu::Tensor2CVMat(img);

    // prepare data
    vector<float> box;
    from_value(input["box"], box);
    vector<float> c;  // center
    vector<float> s;  // scale
    Box2cs(box, c, s);
    auto r = input["rotation"].get<float>();

    cv::Mat dst;
    if (use_udp_) {
      cv::Mat trans =
          GetWarpMatrix(r, {c[0] * 2.f, c[1] * 2.f}, {image_size_[0] - 1.f, image_size_[1] - 1.f},
                        {s[0] * 200.f, s[1] * 200.f});

      cv::warpAffine(src, dst, trans, {image_size_[0], image_size_[1]}, cv::INTER_LINEAR);
    } else {
      cv::Mat trans =
          GetAffineTransform({c[0], c[1]}, {s[0], s[1]}, r, {image_size_[0], image_size_[1]});
      cv::warpAffine(src, dst, trans, {image_size_[0], image_size_[1]}, cv::INTER_LINEAR);
    }

    Value output = input;
    output["img"] = cpu::CVMat2Tensor(dst);
    output["img_shape"] = {1, image_size_[1], image_size_[0], dst.channels()};
    output["center"] = to_value(c);
    output["scale"] = to_value(s);
    MMDEPLOY_DEBUG("output: {}", to_json(output).dump(2));
    return output;
  }
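  // Converts an [x, y, w, h] box into the mmpose center/scale representation:
  // pad the box to the crop aspect ratio, then express its size in units of
  // 200 pixels and enlarge it by a factor of 1.25.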
  void Box2cs(vector<float>& box, vector<float>& center, vector<float>& scale) {
    float x = box[0];
    float y = box[1];
    float w = box[2];
    float h = box[3];
    float aspect_ratio = image_size_[0] * 1.0 / image_size_[1];
    center.push_back(x + w * 0.5);
    center.push_back(y + h * 0.5);
    if (w > aspect_ratio * h) {
      h = w * 1.0 / aspect_ratio;
    } else if (w < aspect_ratio * h) {
      w = h * aspect_ratio;
    }
    scale.push_back(w / 200 * 1.25);
    scale.push_back(h / 200 * 1.25);
  }
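  // UDP-style warp: builds the 2x3 matrix
  //   [ sx*cos(t)  -sx*sin(t)  sx*(-0.5*w_in*cos(t) + 0.5*h_in*sin(t) + 0.5*w_tgt) ]
  //   [ sy*sin(t)   sy*cos(t)  sy*(-0.5*w_in*sin(t) - 0.5*h_in*cos(t) + 0.5*h_tgt) ]
  // with sx = w_dst / w_tgt and sy = h_dst / h_tgt, i.e. rotate about the box center
  // (size_input is twice the center), shift to the target center, then rescale to the
  // destination size.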
  cv::Mat GetWarpMatrix(float theta, cv::Size2f size_input, cv::Size2f size_dst,
                        cv::Size2f size_target) {
    theta = theta * 3.1415926 / 180;
    float scale_x = size_dst.width / size_target.width;
    float scale_y = size_dst.height / size_target.height;
    cv::Mat matrix = cv::Mat(2, 3, CV_32FC1);
    matrix.at<float>(0, 0) = std::cos(theta) * scale_x;
    matrix.at<float>(0, 1) = -std::sin(theta) * scale_x;
    matrix.at<float>(0, 2) =
        scale_x * (-0.5f * size_input.width * std::cos(theta) +
                   0.5f * size_input.height * std::sin(theta) + 0.5f * size_target.width);
    matrix.at<float>(1, 0) = std::sin(theta) * scale_y;
    matrix.at<float>(1, 1) = std::cos(theta) * scale_y;
    matrix.at<float>(1, 2) =
        scale_y * (-0.5f * size_input.width * std::sin(theta) -
                   0.5f * size_input.height * std::cos(theta) + 0.5f * size_target.height);
    return matrix;
  }
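  // Classic transform: estimates the affine matrix from three point pairs (the box
  // center, a point offset from it along the rotated box axis, and a third point at
  // 90 degrees) mapped onto the corresponding points of the output crop.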
  cv::Mat GetAffineTransform(cv::Point2f center, cv::Point2f scale, float rot, cv::Size output_size,
                             cv::Point2f shift = {0.f, 0.f}, bool inv = false) {
    cv::Point2f scale_tmp = scale * 200;
    float src_w = scale_tmp.x;
    int dst_w = output_size.width;
    int dst_h = output_size.height;
    float rot_rad = 3.1415926 * rot / 180;
    cv::Point2f src_dir = rotate_point({0.f, src_w * -0.5f}, rot_rad);
    cv::Point2f dst_dir = {0.f, dst_w * -0.5f};

    cv::Point2f src_points[3];
    src_points[0] = center + scale_tmp * shift;
    src_points[1] = center + src_dir + scale_tmp * shift;
    src_points[2] = Get3rdPoint(src_points[0], src_points[1]);

    cv::Point2f dst_points[3];
    dst_points[0] = {dst_w * 0.5f, dst_h * 0.5f};
    dst_points[1] = dst_dir + cv::Point2f(dst_w * 0.5f, dst_h * 0.5f);
    dst_points[2] = Get3rdPoint(dst_points[0], dst_points[1]);

    cv::Mat trans = inv ? cv::getAffineTransform(dst_points, src_points)
                        : cv::getAffineTransform(src_points, dst_points);
    return trans;
  }

  cv::Point2f rotate_point(cv::Point2f pt, float angle_rad) {
    float sn = std::sin(angle_rad);
    float cs = std::cos(angle_rad);
    float new_x = pt.x * cs - pt.y * sn;
    float new_y = pt.x * sn + pt.y * cs;
    return {new_x, new_y};
  }

  cv::Point2f Get3rdPoint(cv::Point2f a, cv::Point2f b) {
    cv::Point2f direction = a - b;
    cv::Point2f third_pt = b + cv::Point2f(-direction.y, direction.x);
    return third_pt;
  }

 protected:
  bool use_udp_{false};
  vector<int> image_size_;
  std::string backend_;
  Stream stream_;
};

class TopDownAffineImplCreator : public Creator<TopDownAffineImpl> {
 public:
  const char* GetName() const override { return "cpu"; }
  int GetVersion() const override { return 1; }
  ReturnType Create(const Value& args) override { return std::make_unique<TopDownAffineImpl>(args); }
};

MMDEPLOY_DEFINE_REGISTRY(TopDownAffineImpl);

REGISTER_MODULE(TopDownAffineImpl, TopDownAffineImplCreator);

class TopDownAffine : public Transform {
 public:
  explicit TopDownAffine(const Value& args) : Transform(args) {
    impl_ = Instantiate<TopDownAffineImpl>("TopDownAffine", args);
  }
  ~TopDownAffine() override = default;

  Result<Value> Process(const Value& input) override { return impl_->Process(input); }

 private:
  std::unique_ptr<TopDownAffineImpl> impl_;
  static const std::string name_;
};

DECLARE_AND_REGISTER_MODULE(Transform, TopDownAffine, 1);

}  // namespace mmdeploy
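The same crop geometry can be exercised outside the SDK. The snippet below is a minimal, self-contained sketch that mirrors Box2cs and the non-UDP GetAffineTransform path and prints the resulting 2x3 matrix. Only the OpenCV calls are real API; the helper names Rotate and Third, the box, rotation and the 192x256 crop size are made up for illustration.

// Illustrative sketch only; it mirrors Box2cs and GetAffineTransform from the file above.
#include <cmath>
#include <cstdio>

#include <opencv2/imgproc.hpp>

static cv::Point2f Rotate(cv::Point2f p, float rad) {
  return {p.x * std::cos(rad) - p.y * std::sin(rad), p.x * std::sin(rad) + p.y * std::cos(rad)};
}

static cv::Point2f Third(cv::Point2f a, cv::Point2f b) {
  cv::Point2f d = a - b;
  return b + cv::Point2f(-d.y, d.x);
}

int main() {
  // Hypothetical inputs: a person box [x, y, w, h], no rotation, and a 192x256 crop.
  float x = 50.f, y = 40.f, w = 120.f, h = 300.f, rot_deg = 0.f;
  cv::Size out{192, 256};

  // Box -> center/scale (200-pixel scale unit, 1.25 padding), as in Box2cs.
  float aspect = static_cast<float>(out.width) / out.height;
  cv::Point2f center{x + w * 0.5f, y + h * 0.5f};
  if (w > aspect * h) {
    h = w / aspect;
  } else if (w < aspect * h) {
    w = h * aspect;
  }
  cv::Point2f scale{w / 200.f * 1.25f, h / 200.f * 1.25f};

  // Three point pairs determine the affine transform, as in GetAffineTransform.
  float rad = rot_deg * 3.1415926f / 180.f;
  cv::Point2f src_dir = Rotate({0.f, scale.x * 200.f * -0.5f}, rad);
  cv::Point2f dst_center{out.width * 0.5f, out.height * 0.5f};
  cv::Point2f src[3] = {center, center + src_dir, {}};
  cv::Point2f dst[3] = {dst_center, dst_center + cv::Point2f{0.f, out.width * -0.5f}, {}};
  src[2] = Third(src[0], src[1]);
  dst[2] = Third(dst[0], dst[1]);

  cv::Mat trans = cv::getAffineTransform(src, dst);  // 2x3, CV_64F
  for (int r = 0; r < 2; ++r) {
    std::printf("%8.3f %8.3f %8.3f\n", trans.at<double>(r, 0), trans.at<double>(r, 1),
                trans.at<double>(r, 2));
  }
  return 0;
}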