mmdeploy/csrc/core/mpl/static_any.h

// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_CORE_MPL_STATIC_ANY_H_
#define MMDEPLOY_CSRC_CORE_MPL_STATIC_ANY_H_
#include <cstdint>
#include <memory>
#include <stdexcept>
#include <type_traits>
#include <utility>
// A re-implementation of std::any that relies on static type ids instead of RTTI.
// Adapted from libc++-10.
namespace mmdeploy {
namespace traits {
using type_id_t = uint64_t;
template <class T>
struct TypeId {
static constexpr type_id_t value = 0;
};
template <>
struct TypeId<void> {
static constexpr auto value = static_cast<type_id_t>(-1);
};
// ! This macro only works when invoked inside the mmdeploy namespace
#define MMDEPLOY_REGISTER_TYPE_ID(type, id) \
namespace traits { \
template <> \
struct TypeId<type> { \
static constexpr type_id_t value = id; \
}; \
}
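// Example (illustrative; MyPayload and the id value 0x1001 are assumptions,
// not types or ids defined by mmdeploy):
//
//   // inside namespace mmdeploy
//   struct MyPayload { int x; };
//   MMDEPLOY_REGISTER_TYPE_ID(MyPayload, 0x1001);
//
// After registration, traits::TypeId<MyPayload>::value is non-zero, which is
// the constraint checked by StaticAny's constructors and assignment operators.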
} // namespace traits
namespace detail {
template <typename T>
struct is_in_place_type_impl : std::false_type {};
template <typename T>
struct is_in_place_type_impl<std::in_place_type_t<T>> : std::true_type {};
template <typename T>
struct is_in_place_type : public is_in_place_type_impl<T> {};
} // namespace detail
class BadAnyCast : public std::bad_cast {
public:
const char* what() const noexcept override { return "BadAnyCast"; }
};
[[noreturn]] inline void ThrowBadAnyCast() {
#if __cpp_exceptions
throw BadAnyCast{};
#else
std::abort();
#endif
}
// Forward declarations
class StaticAny;
template <class ValueType>
std::add_pointer_t<std::add_const_t<ValueType>> static_any_cast(const StaticAny*) noexcept;
template <class ValueType>
std::add_pointer_t<ValueType> static_any_cast(StaticAny*) noexcept;
namespace __static_any_impl {
using _Buffer = std::aligned_storage_t<3 * sizeof(void*), std::alignment_of_v<void*>>;
template <class T>
using _IsSmallObject =
std::integral_constant<bool, sizeof(T) <= sizeof(_Buffer) &&
std::alignment_of_v<_Buffer> % std::alignment_of_v<T> == 0 &&
std::is_nothrow_move_constructible_v<T>>;
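// A type is stored in-place (small-buffer optimization) only when it fits in
// the three-pointer _Buffer, the buffer's alignment is a multiple of the
// type's alignment, and its move constructor is noexcept; any other type is
// heap-allocated through _LargeHandler.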
enum class _Action { _Destroy, _Copy, _Move, _Get, _TypeInfo };
union _Ret {
void* ptr_;
traits::type_id_t type_id_;
};
template <class T>
struct _SmallHandler;
template <class T>
struct _LargeHandler;
template <class T>
inline bool __compare_typeid(traits::type_id_t __id) {
if (__id && __id == traits::TypeId<T>::value) {
return true;
}
return false;
}
template <class T>
using _Handler = std::conditional_t<_IsSmallObject<T>::value, _SmallHandler<T>, _LargeHandler<T>>;
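// _Handler<T> picks the storage strategy at compile time: _SmallHandler<T>
// constructs T directly in StaticAny's internal buffer, while _LargeHandler<T>
// allocates it on the heap and stores only the pointer.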
} // namespace __static_any_impl
class StaticAny {
public:
constexpr StaticAny() noexcept : h_(nullptr) {}
StaticAny(const StaticAny& other) : h_(nullptr) {
if (other.h_) {
other.__call(_Action::_Copy, this);
}
}
StaticAny(StaticAny&& other) noexcept : h_(nullptr) {
if (other.h_) {
other.__call(_Action::_Move, this);
}
}
template <class ValueType, class T = std::decay_t<ValueType>,
class = std::enable_if_t<
!std::is_same<T, StaticAny>::value && !detail::is_in_place_type<ValueType>::value &&
std::is_copy_constructible<T>::value && traits::TypeId<T>::value>>
explicit StaticAny(ValueType&& value);
template <
class ValueType, class... Args, class T = std::decay_t<ValueType>,
class = std::enable_if_t<std::is_constructible<T, Args...>::value &&
std::is_copy_constructible<T>::value && traits::TypeId<T>::value>>
explicit StaticAny(std::in_place_type_t<ValueType>, Args&&... args);
template <class ValueType, class U, class... Args, class T = std::decay_t<ValueType>,
class = std::enable_if_t<
std::is_constructible<T, std::initializer_list<U>&, Args...>::value &&
std::is_copy_constructible<T>::value && traits::TypeId<T>::value>>
explicit StaticAny(std::in_place_type_t<ValueType>, std::initializer_list<U>, Args&&... args);
~StaticAny() { this->reset(); }
StaticAny& operator=(const StaticAny& rhs) {
StaticAny(rhs).swap(*this);
return *this;
}
StaticAny& operator=(StaticAny&& rhs) noexcept {
StaticAny(std::move(rhs)).swap(*this);
return *this;
}
template <
class ValueType, class T = std::decay_t<ValueType>,
class = std::enable_if_t<!std::is_same<T, StaticAny>::value &&
std::is_copy_constructible<T>::value && traits::TypeId<T>::value>>
StaticAny& operator=(ValueType&& v);
template <
class ValueType, class... Args, class T = std::decay_t<ValueType>,
class = std::enable_if_t<std::is_constructible<T, Args...>::value &&
std::is_copy_constructible<T>::value && traits::TypeId<T>::value>>
T& emplace(Args&&... args);
template <class ValueType, class U, class... Args, class T = std::decay_t<ValueType>,
class = std::enable_if_t<
std::is_constructible<T, std::initializer_list<U>&, Args...>::value &&
std::is_copy_constructible<T>::value && traits::TypeId<T>::value>>
T& emplace(std::initializer_list<U>, Args&&...);
void reset() noexcept {
if (h_) {
this->__call(_Action::_Destroy);
}
}
void swap(StaticAny& rhs) noexcept;
bool has_value() const noexcept { return h_ != nullptr; }
traits::type_id_t type() const noexcept {
if (h_) {
return this->__call(_Action::_TypeInfo).type_id_;
} else {
return traits::TypeId<void>::value;
}
}
private:
using _Action = __static_any_impl::_Action;
using _Ret = __static_any_impl::_Ret;
using _HandleFuncPtr = _Ret (*)(_Action, const StaticAny*, StaticAny*, traits::type_id_t info);
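// A single type-erased function pointer (h_) dispatches all five _Action
// operations for the stored type; _Ret carries either the object pointer
// produced by _Get or the type id produced by _TypeInfo.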
union _Storage {
constexpr _Storage() : ptr_(nullptr) {}
void* ptr_;
__static_any_impl::_Buffer buf_;
};
_Ret __call(_Action a, StaticAny* other = nullptr, traits::type_id_t info = 0) const {
return h_(a, this, other, info);
}
_Ret __call(_Action a, StaticAny* other = nullptr, traits::type_id_t info = 0) {
return h_(a, this, other, info);
}
template <class>
friend struct __static_any_impl::_SmallHandler;
template <class>
friend struct __static_any_impl::_LargeHandler;
template <class ValueType>
friend std::add_pointer_t<std::add_const_t<ValueType>> static_any_cast(const StaticAny*) noexcept;
template <class ValueType>
friend std::add_pointer_t<ValueType> static_any_cast(StaticAny*) noexcept;
_HandleFuncPtr h_ = nullptr;
_Storage s_;
};
namespace __static_any_impl {
template <class T>
struct _SmallHandler {
static _Ret __handle(_Action action, const StaticAny* self, StaticAny* other,
traits::type_id_t info) {
_Ret ret;
ret.ptr_ = nullptr;
switch (action) {
case _Action::_Destroy:
__destroy(const_cast<StaticAny&>(*self));
break;
case _Action::_Copy:
__copy(*self, *other);
break;
case _Action::_Move:
__move(const_cast<StaticAny&>(*self), *other);
break;
case _Action::_Get:
ret.ptr_ = __get(const_cast<StaticAny&>(*self), info);
break;
case _Action::_TypeInfo:
ret.type_id_ = __type_info();
break;
}
return ret;
}
template <class... Args>
static T& __create(StaticAny& dest, Args&&... args) {
T* ret = ::new (static_cast<void*>(&dest.s_.buf_)) T(std::forward<Args>(args)...);
dest.h_ = &_SmallHandler::__handle;
return *ret;
}
private:
template <class... Args>
static void __destroy(StaticAny& self) {
T& value = *static_cast<T*>(static_cast<void*>(&self.s_.buf_));
value.~T();
self.h_ = nullptr;
}
template <class... Args>
static void __copy(const StaticAny& self, StaticAny& dest) {
_SmallHandler::__create(dest, *static_cast<const T*>(static_cast<const void*>(&self.s_.buf_)));
}
static void __move(StaticAny& self, StaticAny& dest) {
_SmallHandler::__create(dest, std::move(*static_cast<T*>(static_cast<void*>(&self.s_.buf_))));
__destroy(self);
}
static void* __get(StaticAny& self, traits::type_id_t info) {
if (__static_any_impl::__compare_typeid<T>(info)) {
return static_cast<void*>(&self.s_.buf_);
}
return nullptr;
}
static traits::type_id_t __type_info() { return traits::TypeId<T>::value; }
};
template <class T>
struct _LargeHandler {
static _Ret __handle(_Action action, const StaticAny* self, StaticAny* other,
traits::type_id_t info) {
_Ret ret;
ret.ptr_ = nullptr;
switch (action) {
case _Action::_Destroy:
__destroy(const_cast<StaticAny&>(*self));
break;
case _Action::_Copy:
__copy(*self, *other);
break;
case _Action::_Move:
__move(const_cast<StaticAny&>(*self), *other);
break;
case _Action::_Get:
ret.ptr_ = __get(const_cast<StaticAny&>(*self), info);
break;
case _Action::_TypeInfo:
ret.type_id_ = __type_info();
break;
}
return ret;
}
template <class... Args>
static T& __create(StaticAny& dest, Args&&... args) {
using _Alloc = std::allocator<T>;
_Alloc alloc;
auto dealloc = [&](T* p) { alloc.deallocate(p, 1); };
std::unique_ptr<T, decltype(dealloc)> hold(alloc.allocate(1), dealloc);
T* ret = ::new ((void*)hold.get()) T(std::forward<Args>(args)...);
dest.s_.ptr_ = hold.release();
dest.h_ = &_LargeHandler::__handle;
return *ret;
}
private:
static void __destroy(StaticAny& self) {
delete static_cast<T*>(self.s_.ptr_);
self.h_ = nullptr;
}
static void __copy(const StaticAny& self, StaticAny& dest) {
_LargeHandler::__create(dest, *static_cast<const T*>(self.s_.ptr_));
}
static void __move(StaticAny& self, StaticAny& dest) {
dest.s_.ptr_ = self.s_.ptr_;
dest.h_ = &_LargeHandler::__handle;
self.h_ = nullptr;
}
static void* __get(StaticAny& self, traits::type_id_t info) {
if (__static_any_impl::__compare_typeid<T>(info)) {
return static_cast<void*>(self.s_.ptr_);
}
return nullptr;
}
static traits::type_id_t __type_info() { return traits::TypeId<T>::value; }
};
} // namespace __static_any_impl
template <class ValueType, class T, class>
StaticAny::StaticAny(ValueType&& v) : h_(nullptr) {
__static_any_impl::_Handler<T>::__create(*this, std::forward<ValueType>(v));
}
template <class ValueType, class... Args, class T, class>
StaticAny::StaticAny(std::in_place_type_t<ValueType>, Args&&... args) {
__static_any_impl::_Handler<T>::__create(*this, std::forward<Args>(args)...);
}
template <class ValueType, class U, class... Args, class T, class>
StaticAny::StaticAny(std::in_place_type_t<ValueType>, std::initializer_list<U> il, Args&&... args) {
__static_any_impl::_Handler<T>::__create(*this, il, std::forward<Args>(args)...);
}
template <class ValueType, class, class>
inline StaticAny& StaticAny::operator=(ValueType&& v) {
StaticAny(std::forward<ValueType>(v)).swap(*this);
return *this;
}
template <class ValueType, class... Args, class T, class>
inline T& StaticAny::emplace(Args&&... args) {
reset();
return __static_any_impl::_Handler<T>::__create(*this, std::forward<Args>(args)...);
}
template <class ValueType, class U, class... Args, class T, class>
inline T& StaticAny::emplace(std::initializer_list<U> il, Args&&... args) {
reset();
return __static_any_impl::_Handler<T>::__create(*this, il, std::forward<Args>(args)...);
}
inline void StaticAny::swap(StaticAny& rhs) noexcept {
if (this == &rhs) {
return;
}
if (h_ && rhs.h_) {
StaticAny tmp;
rhs.__call(_Action::_Move, &tmp);
this->__call(_Action::_Move, &rhs);
tmp.__call(_Action::_Move, this);
} else if (h_) {
this->__call(_Action::_Move, &rhs);
} else if (rhs.h_) {
rhs.__call(_Action::_Move, this);
}
}
inline void swap(StaticAny& lhs, StaticAny& rhs) noexcept { lhs.swap(rhs); }
template <class T, class... Args>
inline StaticAny make_static_any(Args&&... args) {
return StaticAny(std::in_place_type<T>, std::forward<Args>(args)...);
}
template <class T, class U, class... Args>
StaticAny make_static_any(std::initializer_list<U> il, Args&&... args) {
return StaticAny(std::in_place_type<T>, il, std::forward<Args>(args)...);
}
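// Usage sketch (illustrative; MyPayload is the hypothetical registered type
// from the MMDEPLOY_REGISTER_TYPE_ID example above):
//
//   auto any = make_static_any<MyPayload>(MyPayload{42});
//   // any.type() == traits::TypeId<MyPayload>::value
//   auto& ref = static_any_cast<MyPayload&>(any);   // throws BadAnyCast on type mismatch
//   auto* ptr = static_any_cast<MyPayload>(&any);   // returns nullptr on type mismatch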
template <class ValueType>
ValueType static_any_cast(const StaticAny& v) {
using _RawValueType = std::remove_cv_t<std::remove_reference_t<ValueType>>;
static_assert(std::is_constructible<ValueType, const _RawValueType&>::value,
"ValueType is required to be a const lvalue reference "
"or a CopyConstructible type");
auto tmp = static_any_cast<std::add_const_t<_RawValueType>>(&v);
if (tmp == nullptr) {
ThrowBadAnyCast();
}
return static_cast<ValueType>(*tmp);
}
template <class ValueType>
inline ValueType static_any_cast(StaticAny& v) {
using _RawValueType = std::remove_cv_t<std::remove_reference_t<ValueType>>;
static_assert(std::is_constructible<ValueType, _RawValueType&>::value,
"ValueType is required to be an lvalue reference "
"or a CopyConstructible type");
auto tmp = static_any_cast<_RawValueType>(&v);
if (tmp == nullptr) {
ThrowBadAnyCast();
}
return static_cast<ValueType>(*tmp);
}
template <class ValueType>
inline ValueType static_any_cast(StaticAny&& v) {
using _RawValueType = std::remove_cv_t<std::remove_reference_t<ValueType>>;
static_assert(std::is_constructible<ValueType, _RawValueType>::value,
"ValueType is required to be an rvalue reference "
"or a CopyConstructible type");
auto tmp = static_any_cast<_RawValueType>(&v);
if (tmp == nullptr) {
ThrowBadAnyCast();
}
return static_cast<ValueType>(std::move(*tmp));
}
template <class ValueType>
inline std::add_pointer_t<std::add_const_t<ValueType>> static_any_cast(
const StaticAny* __any) noexcept {
static_assert(!std::is_reference<ValueType>::value, "ValueType may not be a reference.");
return static_any_cast<ValueType>(const_cast<StaticAny*>(__any));
}
template <class RetType>
inline RetType __pointer_or_func_test(void* p, std::false_type) noexcept {
return static_cast<RetType>(p);
}
template <class RetType>
inline RetType __pointer_or_func_test(void*, std::true_type) noexcept {
return nullptr;
}
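// Function types cannot be stored in a StaticAny, so when ValueType is a
// function type the pointer-form cast below returns nullptr instead of
// reinterpreting the stored object pointer.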
template <class ValueType>
std::add_pointer_t<ValueType> static_any_cast(StaticAny* any) noexcept {
using __static_any_impl::_Action;
static_assert(!std::is_reference<ValueType>::value, "ValueType may not be a reference.");
using ReturnType = std::add_pointer_t<ValueType>;
if (any && any->h_) {
void* p = any->__call(_Action::_Get, nullptr, traits::TypeId<ValueType>::value).ptr_;
return __pointer_or_func_test<ReturnType>(p, std::is_function<ValueType>{});
}
return nullptr;
}
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_CORE_MPL_STATIC_ANY_H_