Compare commits

...

167 Commits

Author SHA1 Message Date
RunningLeon 5a9ac8765d
Fix readthedocs ()
* [Fix]: limit urllib3 for readthedocs ()

* fix readthedocs for zh_cn

* fix
2023-05-31 15:18:47 +08:00
AllentDan 6cd77c66b7
add deform conv v3 plugin ()
* add deform conv v3 plugin

* update doc

* resolve comments

* update description
2023-05-23 10:22:47 +08:00
RunningLeon 335ef8648d
fix mmseg exportation for out_channels=1 () 2023-05-04 12:51:05 +08:00
RunningLeon c73756366e
bump version to v0.14.0 ()
* update

* bump version

* Update README.md

* fix conflicts

* fix ci

* fix circleci

* upgrade to ubuntu20.04 for github ci

* update

* install glibc

* try to fix cuda build

* try to fix cuda build

* fix build-cu102 && build_cpu_sdk

* revert to setup-python@v2

* try to fix pplnn

* fix protobuf

---------

Co-authored-by: Xin Chen <irexyc@gmail.com>
2023-04-05 14:28:00 +08:00
Chen Xin af16b9a451
Update get_started.md (master) ()
* update generate_build_config to support cxx11abi tag

* test prebuild ci

* update docs/zh_cn/get_started.md

* update docs/en/get_started.md

* fix prebuild ci

* update prebuilt_package_windows.md

* update prebuild ci deps

* try to fix prebuild ci

* fix prebuild ci

* fix prebuild ci

* remove trigger [no ci]

* fix name [no ci]

* fix typos

* fix script
2023-04-04 11:00:53 +08:00
tpoisonooo 1411ed3858
[improvement]: openvino upgrade to 2022.3.0 () 2023-03-31 14:08:07 +08:00
Chen Xin bbee83da6c
update pplnn to v0.9.2 to resolve ci error ()
* [Fix] Fix package_tools ()

* copy mmdeploy_onnx2ncnn when build wheel package

* prevent copy build/lib/* when build wheel

* fix mmdeploy_builder.py

* try to fix backend-pplnn ci
2023-03-29 21:14:59 +08:00
Junhwa Song f7c484a046
Add support for converting an inpainting model to ONNX and TensorRT ()
* Add support for inpainting models

* Add configs

* Add comment

* Refactor

* Add test code for inpainting task

* Fix

* Fix

* Update

* Fix

* Fix

* Update docs

* Update

* Fix visualization

* Handle case without Resize
2023-03-29 19:17:24 +08:00
Chen Xin aae9f32623
[Refactor] Rename mmdeploy_python to mmdeploy_runtime ()
* [Feature]: Add github prebuild workflow after new release. ()

* add prebuild dockerfile

* add prebuild test workflow

* update

* update

* rm other workflow for test

* Update docker image

* add win10 prebuild

* add test prebuild

* add windows scripts in prebuilt package

* add linux scripts in prebuilt package

* generate_build_config.py

* fix cudnn search

* fix env

* fix script

* fix rpath

* fix cwd

* fix windows

* fix lint

* windows prebuild ci

* linux prebuild ci

* fix

* update trigger

* Revert "rm other workflow for test"

This reverts commit 0a03872750.

* update sdk build readme

* update prebuild

* fix dll deps for python >= 3.8 on windows

* fix ci

* test prebuild

* update test script to avoid modify upload folder

* add onnxruntime.dll to mmdeploy_python

* update prebuild workflow

* update prebuild

* Update loader.cpp.in

* remove exists prebuild files

* fix opencv env

* update cmake options for mmdeploy python build

* remove test code

* fix lint

---------

Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: RunningLeon <maningsheng@sensetime.com>

* rename mmdeploy_python -> mmdeploy_runtime

* test master prebuild

* fix trt net build

* Revert "test master prebuild"

This reverts commit aad5258648.

* add master branch

* fix linux set_env script

* update package_tools docs

* fix gcc 7.3 aligned_alloc

* comment temporarily as text_det_recog can't be built with prebuild package built under manylinux

---------

Co-authored-by: RunningLeon <mnsheng@yeah.net>
Co-authored-by: RunningLeon <maningsheng@sensetime.com>
2023-03-29 19:02:37 +08:00
Chen Xin c7003bb76a
[Fix] Fix CascadeRoIHead export when reg_class_agnostic=True in box_head ()
* fix convnext

* fix batch inference

* update docs

* add regression test config

* fix pose_tracker.cpp lint
2023-03-28 20:59:26 +08:00
Damon Da Tong d181311dee
fix pose_tracker python api will raise ValueError when result has no human () 2023-03-27 14:50:53 +08:00
RunningLeon 39c3282966
[Fix]: update stale workflow ()
* fix

* add job permission

* update
2023-03-23 14:59:56 +08:00
kumailf dba46c3496
fix typo in docs/en/07-developer-guide/regression_test.md () 2023-03-23 13:33:53 +08:00
Li Zhang 140e0519e6
[Fix] Export `mmdeploy` only in monolithic build ()
* export only `mmdeploy` in monolithic build

* export dynamic backends
2023-03-21 11:21:19 +08:00
Chen Xin 06dac732c9
optimize mmpose postprocess () 2023-03-21 11:06:18 +08:00
hanrui1sensetime 34c68663b6
[Sync] Sync Java API to master ()
* sync rotated detector java api to master

* sync mmseg score output to master

* sync java docs for demo

* sync java docs for master
2023-03-13 11:31:39 +08:00
tpoisonooo 48291f01c8
docs(project): highlight version () 2023-03-13 10:23:52 +08:00
Li Zhang 12a130262f
add unified device guard () 2023-03-10 19:16:13 +08:00
Li Zhang bcb93ead58
[Enhancement] Add optional `softmax` in `LinearClsHead` ()
* add softmax in cls postprocess

* minor
2023-03-09 16:54:15 +08:00
Shengxi Li f69c636a2e
mmocr FPNC neck support asf module ()
* mmocr FPNC neck support asf module

* mmocr FPNC neck support asf module

---------

Co-authored-by: lishengxi <mtdp@MacBook-Pro-8.local>
2023-03-03 15:27:02 +08:00
Li Zhang cb964f6a58
[Fix] Fix Debian aarch64 cross compiling ()
* fix debian cross compiling

* comment

* minor
2023-03-03 12:41:52 +08:00
Chen Xin d95950d705
[Feature] Dynamically load net module to remove dependencies of mmdeploy.so () ()
* dynamic load net module

* export xxx_net

* add runpath

* link dl

* remove -ldl for macos

* fix rpath

* module -> shared

* set MMDEPLOY_DYNAMIC_BACKEND OFF when MMDEPLOY_BUILD_SDK_MONOLITHIC is OFF
2023-03-03 11:48:59 +08:00
Chen Xin 7de413a19c
[Feature] Sync csharp apis with newly added c apis && demo ()
* sync c api to c#

* fix typo

* add pose tracker c# demo

* udpate gitignore

* remove print

* fix lint

* update rotated detection api

* update rotated detection demo

* rename pose_tracking -> pose_tracker

* use input size as default
2023-03-02 09:20:41 +08:00
lvhan028 7fed511f09
disable building demos when preparing prebuilt package () 2023-02-28 21:16:35 +08:00
Li Zhang 7029e90064
avoid linking static libs in monolithic build () 2023-02-23 14:21:51 +08:00
Li Zhang f78a452681
fix missing include for gcc-10 build () 2023-02-23 12:05:25 +08:00
Li Zhang eb75bee921
add `Model::ReadConfig` & simplify handle creation () 2023-02-21 17:09:07 +08:00
SineYuan c941045156
fix normalization to_rgb option () 2023-02-21 17:08:34 +08:00
tripleMu 4bb8920b61
Fix trtlogger instead of mm logger ()
* Fix trtlogger instead of mm logger

* Reset trt logger to mmdeploy logger

* rename logger name
2023-02-20 16:45:47 +08:00
YH fd47fa2071
[Enhance] support TensorRT engine for onnxruntime ()
* Support trt engine for onnxruntime

* Apply lint

* Check trt execution provider

* Fix typo

* Fix provider order

* Check device
2023-02-20 14:18:09 +08:00
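
A minimal sketch of what the provider checks above amount to in ONNX Runtime terms — verifying that the TensorRT execution provider exists and listing providers in TensorRT → CUDA → CPU order. The helper name and structure are illustrative, not the PR's actual code.

```python
import onnxruntime as ort

def build_session(onnx_path: str, device: str = "cuda") -> ort.InferenceSession:
    # ONNX Runtime tries providers from left to right, so TensorRT must come
    # before CUDA, which must come before the CPU fallback.
    available = ort.get_available_providers()
    providers = []
    if device == "cuda":
        if "TensorrtExecutionProvider" in available:
            providers.append("TensorrtExecutionProvider")
        if "CUDAExecutionProvider" in available:
            providers.append("CUDAExecutionProvider")
    providers.append("CPUExecutionProvider")  # always keep a CPU fallback
    return ort.InferenceSession(onnx_path, providers=providers)
```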
Li Zhang b1be9c67f3
[Fix] Fix palette generation on opencv-3.x () 2023-02-17 18:33:38 +08:00
q.yao 02d5a09989
bump version to 0.13.0 () 2023-02-16 14:15:19 +08:00
Songki Choi fa9aaa9d61
[Enhancement] Loosen protobuf version criteria for onnx upgrade ()
- onnx<1.13.0 has high security issue
  (https://github.com/advisories/GHSA-ffxj-547x-5j7c)

- Python packages depending on mmdeploy cannot upgrade onnx as
  - onnx==1.13.0 depends on protobuf>=3.20.2
  - mmdeploy depends on protobuf<=3.20.1

- Suggesting [protobuf<=3.20.2] for quick solution

Signed-off-by: Songki Choi <songki.choi@intel.com>
2023-02-15 16:03:15 +08:00
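
A small sketch of the version conflict described above, using the `packaging` library; the version numbers come straight from the commit message, the script itself is only illustrative.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

onnx_needs = SpecifierSet(">=3.20.2")   # onnx==1.13.0 requires protobuf>=3.20.2
old_pin = SpecifierSet("<=3.20.1")      # mmdeploy's previous protobuf pin
new_pin = SpecifierSet("<=3.20.2")      # the loosened pin suggested here

candidate = Version("3.20.2")
print(candidate in onnx_needs and candidate in old_pin)  # False: old pin blocks onnx 1.13
print(candidate in onnx_needs and candidate in new_pin)  # True: 3.20.2 satisfies both sides
```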
Chen Xin 599c701655
[Enhancement] Support cmake configure when system exists multiple cuda versions. ()
* update cmake

* typos
2023-02-14 16:20:47 +08:00
Eugene Liu e519898adb
Fix bug in remove_imports ()
* Fix bug in remove_imports

IndexError: list index out of range, because the `model.opset_import` list changes dynamically while it is being iterated

* pre-commit fix
2023-02-13 19:46:24 +08:00
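
The failure mode named above is the classic one of deleting from a list while indexing over its original length; a small illustrative sketch (not the actual `remove_imports` code):

```python
# Stand-in for model.opset_import: a list that shrinks while being indexed.
items = ["custom_domain", "", "custom_domain", ""]

# Buggy pattern -- range(len(items)) is fixed at 4, but deletions shrink the
# list, so items[i] eventually raises IndexError: list index out of range.
#   for i in range(len(items)):
#       if items[i] == "custom_domain":
#           del items[i]

# Safer pattern: collect the indices first, then delete from the back so the
# remaining indices stay valid.
to_remove = [i for i, domain in enumerate(items) if domain == "custom_domain"]
for i in reversed(to_remove):
    del items[i]
print(items)  # ['', '']
```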
Li Zhang cadc2658f3
Fix `WarpBbox` and memory leak in `TextRecognizer` () 2023-02-13 19:43:25 +08:00
q.yao 0f5b149557
fix instance norm double free () 2023-02-13 19:42:37 +08:00
Li Zhang 31b099a37b
add coco whole-body skeleton () 2023-02-09 18:31:35 +08:00
AllentDan a3311b0bbb
enable TRT parse ONNX model from file () 2023-02-09 15:54:37 +08:00
Li Zhang f5a05b5219
[Refactor] Support batch inference with shape clustering ()
* refactor `NetModule`

* name

* fix sorting

* fix indices
2023-02-08 20:14:28 +08:00
q.yao d8e4a78636
[Improvement] Better unit test. ()
* update test for mmcls and mmdet

* update det3d mmedit mmocr mmpose mmrotate

* update mmseg

* bug fixing

* refactor ops

* rename variable

* remove comment
2023-02-08 11:30:59 +08:00
Li Zhang 5de0ecfcaf
[Fix] Add an option to flip webcam inputs for pose tracker demo () 2023-02-07 20:27:43 +08:00
Li Zhang 2b18596795
[Enhancement] Optimize C++ demos ()
* optimize demos

* show text in image

* optimize demos

* fix minor

* fix minor

* fix minor

* install utils & fix demo file extensions

* rename

* parse empty flags

* antialias

* handle video complications
2023-02-07 19:08:46 +08:00
RunningLeon 31f422244b
fix stale workflow ()
* fix stale workflow

* Update stale.yml
2023-02-06 21:17:26 +08:00
AllentDan 12b3d18c7a
[Fix] fix torch allocator resource releasing ()
* delete root logger and add condition before calling caching_allocator_delete

* fix lint error

* use torch._C._cuda_cudaCachingAllocator_raw_delete
2023-02-06 11:35:44 +08:00
Chen Xin b85f34141b
[Feature] Support feature map output for mmsegmentation ()
* add feature map output for mmseg

* update api

* update demo

* fix return

* update format_shape

* fix lint

* update csharp demo

* update python demo && api

* fix coreml build

* fix lint

* better sort

* update

* update cpp demo & add missing header

* change to CHW

* update csharp api

* update isort version to 5.12.0

* fix python api

* fix log

* more detail api docs

* isort support python3.7

* remove isort change

* remove whitespace

* axes check

* remove FormatShapeImpl

* minor

* add permute tc

* remove stride buffer
2023-02-03 20:47:55 +08:00
AllentDan 7d085bee0e
directly set pytorch metric when it's empty for regression_test.py () 2023-02-03 11:25:11 +08:00
KerwinKai 23eed5c265
[Bug] Fixed ncnn model conversion errors in Dockerfile (no module named 'ncnn.ncnn') ()
* Update Dockerfile

* Update Dockerfile

* Update Dockerfile
2023-02-03 10:22:25 +08:00
q.yao 5fdf00324b
[Fix] add bounds to avoid large resource usage of nms operator on jetson ()
* fix trt nms jetson

* update-for-comment

* clang format
2023-02-01 14:11:43 +08:00
q.yao 99d6fb3190
fix ascend () 2023-01-31 16:49:41 +08:00
tripleMu 85320df2b4
Fix isort lint error by upgrading it to 5.11.5 () 2023-01-31 13:31:59 +08:00
lvhan028 b101a4af65
[Enhancement] remove MMDEPLOY_BUILD_SDK_CXX_API option ()
* remove MMDEPLOY_BUILD_SDK_CXX_API option

* update

* update
2023-01-31 13:29:59 +08:00
Li Zhang 3d425bbb9f
[Feature] Pose tracker C/C++/Python API&demos ()
* add PoseTracker API

* add mahalanobis distance, add det_pose demo

* simplify api

* simplify api

* fix cmake & fix `CropResizePad`

* ignore out of frame bboxes

* clean-up

* fix lint

* add c api docs

* add c++ api docs/comments

* fix gcc7 build

* fix gcc7+opencv3

* fix stupid lint

* fix ci

* add help info & webcam support for C++ pose tracker demo

* add webcam support for Python pose tracker demo

* fix lint

* minor

* minor

* fix MSVC build

* fix python binding

* simplify module adapter

* fix module adapter

* minor fix
2023-01-31 11:24:24 +08:00
AllentDan 093badf90c
fix rknn output index error in SDK () 2023-01-30 20:50:06 +08:00
q.yao 8a050f10dc
suppress onnx optimizer warning () 2023-01-20 00:25:50 +08:00
tpoisonooo 7e48fb2905
improvement(tools/scripts): pip install with user environment () 2023-01-20 00:19:37 +08:00
Li Zhang 8bb3fcc6d8
fix 'cvtcolor' error in the preprocessing of single channel images () 2023-01-20 00:04:42 +08:00
q.yao 513b1c3cfb
Fix coreml ()
* fix coreml topk

* update

* fix lint
2023-01-19 11:42:18 +08:00
kaizhong bce276ef24
[Feature]: add a tool to generate supported-backends markdown table ()
* convert2markdown

* update yaml2mardown code

* code update

* add parse_args

* add parse_args

* add parse_args

* add parse_args

* add website list

* add website list

* add website list

* add website list

* add website list

* add website list

* add website list

* add url in yaml

* add table in convert

* add table in convert

* From yaml export markdown

* From yaml export markdown

* From yaml export markdown

* From yaml export markdown

* From yaml export markdown

* From yaml export markdown

* Rename convert.py to generate_md_table.py

generate_markdownd_table

* docs(project): sync en and zh docs

* docs(project): sync en and zh docs

* docs(project): sync en and zh docs

* docs(project): sync en and zh docs

* docs(project): sync en and zh docs

* docs(project): sync en and zh docs

* docs(project): sync en and zh docs

* Update mmaction.yml

* add backends parser

* add backends parser

* Add type for the codeblock.

* move to useful tools
2023-01-18 16:32:26 +08:00
tpoisonooo 968b4b0b60
fix(requirements): codebase version () 2023-01-13 16:09:37 +08:00
Chen Xin c458e2a524
[Enhancement] Speedup TopDownAffine by CropResizePad ()
* "use 'CropResizePad' to speed up topdownaffine"

* add missing header
2023-01-13 16:08:29 +08:00
hanrui1sensetime 9d3b494079
[Fix] Fix visualize api bug ()
* fix visualize api bug

* fix visualize
2023-01-13 10:42:57 +08:00
Chen Xin 9a1f4e6145
[Fix] Fix example standalone build for msvc ()
* fix example build for msvc

* move /Zc:__cplusplus to core
2023-01-11 10:55:17 +08:00
RunningLeon 3527412127
change log file extension to 'txt' in regression test() 2023-01-10 17:42:55 +08:00
Nghia 1b048d88ca
fixed script errors when calling miniconda.sh () 2023-01-10 13:52:24 +08:00
q.yao 0737a59f44
[Improvement] Support auto release note ()
* Support auto release note

* update labels
2023-01-10 10:54:25 +08:00
hanrui1sensetime d5bd0072a2
fix android build command () 2023-01-10 10:44:49 +08:00
Li Zhang e4ad0d4c45
[Fix] Fix aligned allocations on Android ()
* fix android alignment

* fix typo

* fix size
2023-01-06 18:04:28 +08:00
Li Zhang 4463572311
Fix debug build for PoseTracker () 2023-01-03 14:24:11 +08:00
Chen Xin c0ca074c11
Fix build error on windows-cuda platform () 2023-01-03 12:12:20 +08:00
lvhan028 c3986cebe8
bump version to v0.12.0 () 2022-12-30 16:17:37 +08:00
hanrui1sensetime 20b2aff660
[Fix] Fix batch inference error for Mask R-CNN ()
* fix Mask R-CNN for multi-batch

* fix flake8

* fix bug

* use Sequence instead list

* fix docstring

* only test_img accept list input
2022-12-30 14:26:33 +08:00
Li Zhang 20e0563682
[Enhancement] Optimize pose tracker ()
* sync master

* suppress overlapped tracks

* add CUDA WarpAffine

* export symbols

* fix linkage

* update pose tracker

* clean-up

* fix MSVC build

* fix MSVC build

* add ffmpeg cli command
2022-12-29 19:12:55 +08:00
tpoisonooo f62352a5fa
fix(README): badge error and add github action status ()
* fix(README): badge error and add github action status

* docs(REAMDE): fix badge

* docs(README): update color

* docs(REAMEDE): update color

* docs(README): update table
2022-12-29 16:08:29 +08:00
hanrui1sensetime f21dc4e7d3
Cherry-pick to fix ops unittest seg-fault error ()
* cherry-pick PR1352 to master

* fix test_ops with teardown and skip

* remove useless line

* fix lint

Co-authored-by: q.yao <yaoqian@sensetime.com>
2022-12-29 12:08:40 +08:00
q.yao fc98472e9c
[Refactor] update tutorial about how to support new backend () 2022-12-29 12:05:31 +08:00
Li Zhang e3f033ee5d
build monolithic SDK by default () 2022-12-28 11:44:20 +08:00
AllentDan 85b7b967ee
[Feature] Support probability output for segmentation ()
* add do_argmax flag

* sdk support

* update doc

* replace do_argmax with with_argmax

* add todo
2022-12-26 15:48:07 +08:00
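
A rough sketch of the post-processing difference that the `with_argmax` flag mentioned above toggles — an integer label mask versus a per-class probability map. The function and array shapes are illustrative only.

```python
import numpy as np

def postprocess(logits: np.ndarray, with_argmax: bool = True) -> np.ndarray:
    """logits: raw network output of shape (num_classes, H, W)."""
    if with_argmax:
        return logits.argmax(axis=0)  # (H, W) integer label mask
    # probability map: softmax over the class axis, shape (num_classes, H, W)
    exp = np.exp(logits - logits.max(axis=0, keepdims=True))
    return exp / exp.sum(axis=0, keepdims=True)
```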
q.yao d113a5f1c7
[Refactor] refactor is_available, check_env ()
* refactor is available

* remove try catch in apis

* fix trt check env

* fix ops_info

* update default value

* remove backend list

* optimial pycuda

* update requirement, check env for rknn
2022-12-23 12:06:32 +08:00
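
For reference, the availability check as it appears in the updated CI scripts further down in this comparison: a single call with `with_custom_ops` instead of two separate checks.

```python
import mmdeploy.apis.onnxruntime as ort_api

# previous style, two separate checks:
#   assert ort_api.is_available() and ort_api.is_custom_ops_available()
# style used in the updated CI scripts, one call:
assert ort_api.is_available(with_custom_ops=True)
```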
q.yao 5285caf30a
[Refactor] Add 'to_backend' in BackendManager ()
* Refactor to backend

* export_postprocess_mask = False as default

* update zh_cn docs

* solve comment

* fix comment
2022-12-21 14:04:16 +08:00
AllentDan 26d71ce0a8
add 'is_batched' argument to pipeline.json () 2022-12-21 10:45:00 +08:00
BuxianChen f9f351ae1f
update FAQ about copying onnxruntime dll to 'mmdeploy/lib' () 2022-12-20 14:39:31 +08:00
Michał Antoszkiewicz 202bf00eb7
Fix 'cannot seek vector iterator' in debug windows build ()
Signed-off-by: Michal Antoszkiewicz <mantoszkiewicz@codeflyers.com>

Signed-off-by: Michal Antoszkiewicz <mantoszkiewicz@codeflyers.com>
2022-12-20 10:13:33 +08:00
Chen Xin dbc4b26dc1
fix cuda10.2 build () 2022-12-16 10:15:15 +08:00
tpoisonooo 77c2ee5feb
fix(tools/scripts): build aarch option () 2022-12-15 20:41:59 +08:00
Chen Xin c72f2eaa31
[Docs] Add mmaction2 sphinx-doc link ()
* add mmaction2 sphinx doc

* consistent with other doc formats

* change title

* fix ci

* add missing coreml sphinx doc
2022-12-15 19:01:34 +08:00
GY 05ed8e16ea
update to ppl.nn v0.9.1 and ppl.cv v0.7.1 () 2022-12-13 14:21:24 +08:00
q.yao 7cb4b9b18a
[Enhancement] Support tvm ()
* finish framework

* add autotvm and auto-scheduler tuner

* add python deploy api

* add SDK net(WIP

* add sdk support

* support det, support vm

* fix vm sdk

* support two stage detector

* add instance seg support

* add docstring

* update docs and ut

* add quantize

* update doc

* update docs

* synchronize stream

* support dlpack

* remove submodule

* fix stride

* add alignment

* support dlpack

* remove submodule

* replace exclusive_scan

* add backend check

* add build script

* fix comment

* add ci

* fix ci

* ci fix2

* update build script

* update ci

* add pytest

* update sed command

* update sed again

* add xgboost

* remove tvm ut

* update ansor runner

* add stream sync

* fix topk

* sync default stream

* fix tvm net

* fix window
2022-12-12 21:19:40 +08:00
q.yao ac47cad407
[Improvements] Support TorchAllocator as TensorRT Gpu Allocator ()
* add TorchAllocator for TensorRT

* check mdcn input shape
2022-12-12 18:43:59 +08:00
Chen Xin 52fd4fe9f3
[Fix] Remove cudnn dependency for transform 'mmaction2::format_shape' ()
* fix format shape

* merge common code

* use throw_exception

* udpate code format
2022-12-12 14:34:15 +08:00
RunningLeon ae785f42e1
fix typo: rename 'stable.yml' as 'stale.yml' () 2022-12-12 11:17:40 +08:00
q.yao af4d304004
support torchjit mdcn () 2022-12-12 10:28:49 +08:00
RunningLeon c1ca5a3dbf
add stale workflow to check issues and PRs () 2022-12-09 18:41:36 +08:00
q.yao 1748780c91
[Refactor] ease build wrapper master ()
* ease build wrapper

* import eanum only when necessary

* update docs

* rename manager

* update for comment

* replace staticmethod with classmethod

* fix torchjit
2022-12-09 17:19:34 +08:00
q.yao 8ea3dc943f
[Fix] Fix for torch113 for master () 2022-12-08 17:17:27 +08:00
q.yao 4046e13146
Reformat multi-line logs and docstrings () 2022-12-06 19:50:58 +08:00
HinGwenWoong 7b3c3bc223
[Enhancement] Add pip source in dockerfile for `master` branch ()
* Add pip source

* Add pip source
2022-12-06 19:46:01 +08:00
RunningLeon 2a1fed91c9
bump mmdeploy sdk version to 0.11.0 () 2022-11-30 17:54:40 +08:00
RunningLeon 55a3c8cf78
Bump version to v0.11.0 ()
* bump version v0.11.0

* fix
2022-11-30 17:18:13 +08:00
q.yao bf80653446
fix gelu torch>1.12 () 2022-11-30 14:05:48 +08:00
tpoisonooo f6b35f3b68
fix(CI): ncnn script install ()
* CI(script): fix ncnn install

* docs(build): update ncnn version to 20221128

* fix(CI): trigger
2022-11-29 20:37:06 +08:00
Chen Xin 2c9861555f
[Enhancement] add mmaction.yml for test ()
* add mmaction.yml for test

* This is a combination of 2 commits.

add missing file

* fix typo

* remove
2022-11-29 18:48:29 +08:00
Chen Xin c97aed1a96
fix total time () 2022-11-29 13:45:50 +08:00
Xin Li 73afa61bd8
update master branch () 2022-11-29 13:42:47 +08:00
Chen Xin 0830acb40c
[FIX] Fix csharp net48 and batch inference ()
* fix csharp net48

* add missing file

* update

* fix batch inference

* update demo

* update

* update version
2022-11-29 11:48:36 +08:00
AllentDan 047ab67c78
[Fix] fix visualization for partition ()
* init

* lint

* pass output_names outside

* docstring & type hint
2022-11-29 11:40:00 +08:00
q.yao b521e7da03
fix topk () 2022-11-28 17:35:15 +08:00
hanrui1sensetime 9ea8610133
[Fix] fix ncnn torch 1.12 master ()
* fix ncnn torch 1.12 master

* remove debug line

* add docstring
2022-11-28 17:34:39 +08:00
AllentDan d9d3ded8bc
concat datasets pytorch metric () 2022-11-28 17:34:02 +08:00
Li Zhang d77aeaa480
[Refactor] Decouple preprocess operation and transformation ()
* refactor SDK registry

* fix lint

* decouple transform logic and operations

* data management

* improve data management

* improve data management

* context management

* fix ResizeOCR

* fix operation fallback logic

* fix MSVC build

* clean-up

* sync master

* fix lint

* Normalize - add `to_float`, merge `cvtcolor` operations

* fix macOS build

* rename

* cleanup

* fix lint

* fix macOS build

* fix MSVC build

* support elena

* fix

* fix

* optimize normalize

* fix

* fix MSVC build

* simplify

* profiler

* use `throw_exception`

* misc

* fix typo
2022-11-28 14:46:05 +08:00
Chen Xin 3d1c135297
[Enhancement] refactor profiler ()
* reduce profile node name

* add profiler for pipeline

* add profiler for cond

* udpate
2022-11-28 10:44:54 +08:00
Li Zhang 6468ef180d
[Fix] Relax module adapter template constraints ()
* relax module adapter constraint

* remove forwarding `operator()`
2022-11-27 11:58:09 +08:00
Li Zhang 385f5b6102
[Fix] Fix det_pose demo ()
* fix det_pose demo

* remove useless input
2022-11-27 11:56:59 +08:00
RunningLeon 10e5cf6e0f
update reg test ()
* give model path instead of 'x' when conversion failed

* set --models with default value ['all']

* fix mmseg yml
2022-11-25 21:16:22 +08:00
q.yao 0d16f6ec30
fix pad to square ()
* fix pad to square

* fix topk

* remove comment

* recovery topk
2022-11-25 17:31:44 +08:00
q.yao 4e1c83ab5b
[Fix] fix yolohead trt8.2 ()
* fix yolohead trt8.2

* remove score_threshold
2022-11-25 15:35:09 +08:00
Li Zhang 4d4c10a2dc
[Enhancement] Avoid copying dense arrays in Python API ()
* eliminate copying for segmentor

* fix segmentor

* eliminate copying in Python API

* minor fix
2022-11-24 18:23:34 +08:00
Chen Xin 73e095a4b8
[Fix] fix mmaction2 docs ()
* fix typo and add link to README.md

* fix
2022-11-24 16:40:56 +08:00
Jiahao Sun 9bbe3c0355
[Feat] Add end2end pointpillars & centerpoint(pillar) deployment for mmdet3d ()
* add end2end pointpillars & centerpoint(pillar)

* fix centerpoint UT

* uncomment pycuda

* add nvidia copyright and remove post_process for voxel_detection_model

* keep comments

* add anchor3d_head init

* remove pycuda comment

* add pcd test sample
2022-11-24 16:14:50 +08:00
AllentDan 301035a06f
[Fix] fix cls head in SDK ()
* fix cls head

* resolve comments
2022-11-24 14:15:34 +08:00
RunningLeon de96f51231
Update regression test to parse eval result from json ()
* export metrics results to json

* fix mmedit

* update docs

* fix test failure

* fix

* fix mmocr metrics

* remove srgan config with no set5 test
2022-11-22 20:47:22 +08:00
tpoisonooo b23411d907
fix(tools/scripts): find env file failed ()
* fix(tools/scripts): find env file failed

* Update quantize_model.md
2022-11-22 20:26:55 +08:00
Li Zhang b5b0dcfcff
[Fix] Support onnxruntime-1.13 ()
* support onnxruntime-1.13

* fix lint
2022-11-22 20:25:44 +08:00
AllentDan 4dd4d4851b
Add rv1126 yolov3 support to sdk ()
* add yolov3 head to SDK

* add yolov5 head to SDK

* fix export-info and lint, add reverse check

* fix lint

* fix export info for yolo heads

* add output_names to partition_config

* fix typo

* config

* normalize config

* fix

* refactor config

* fix lint and doc

* c++ form

* resolve comments

* fix CI

* fix CI

* fix CI

* float strides anchors

* refine pipeline of rknn-int8

* config

* rename func

* refactor

* rknn wrapper dict and fix typo

* rknn wrapper output update,  mmcls use end2end type

* fix typo
2022-11-22 20:16:22 +08:00
Xin Li 522fcc0635
fix bad links () 2022-11-21 12:57:14 +08:00
Chen Xin cdb6b46955
Sdk profiler ()
* sdk-profiler

* fix lint

* support lift

* sync net module when profile

* use Scope*

* update use task name

* fix

* use std::unique_ptr<Event>

* remove mmdeploy::graph link for c and transform

* fix

* fix

* fix
2022-11-21 12:52:21 +08:00
tpoisonooo 938ef537a7
Improve mmdet3d doc ()
* docs(mmdet3d): add trt version desc

* docs(mmdet3d): update
2022-11-18 18:34:30 +08:00
AllentDan 0da6059342
correct ncnn-int8 config path () 2022-11-18 10:21:54 +08:00
Li Zhang 99040d5655
[Refactor] better SDK registry ()
* refactor SDK registry

* fix lint

* fix typo

* sync

* use nested namespace

* rename
2022-11-15 21:06:13 +08:00
lvhan028 b0a350d49e
build opencv for aarch64 with videoio enabled ()
* build opencv for aarch64 with videoio enabled

* update doc

* update

* update

* update
2022-11-11 10:22:15 +08:00
lvhan028 18c6ae57cf
add more images for demos and user guides () 2022-11-09 21:06:32 +08:00
Mingcong Han ff7b8fb176
[FIX] set stream argument when using async memcpy () 2022-11-09 13:41:41 +08:00
Jelle Maas 5923054bb4
Add Core ML common configuration ()
* Add CoreML instance segmentation configs

* Add common configuration for Core ML backend

* Fix pre-commit hook failures
2022-11-08 17:25:56 +08:00
Li Zhang b49cf42220
[Enhancement] Avoid copying dense arrays in C API ()
* reduce copying dense array in C API

* format

* fix detector

* fix MSVC build

* simplify
2022-11-07 22:01:31 +08:00
lvhan028 625593d6f3
[Feature] Support rv1126 in sdk ()
* tmp

* refine

* update ssd-lite

* tmp

* tmp

* 0.1

* 0.1.1

* rename to base_dense_head

* remove debug code

* wait stream

* update cmakelists

* add some comments

* fix lint

* fix ci error

* fix according reviewer comments

* update params

* fix

* support normalize with to_float being false

* fix lint

* support rv1126 build ci

* support rv1126 build ci

* change debug level

* fix ci

* update

* update doc

* fix circleci error

* update normalize

* update

* check in build script

* change name
2022-11-07 11:13:47 +08:00
Chen Xin 940fffa075
fix some errors () 2022-11-04 22:38:28 +08:00
Chen Xin d8e6229dc5
Support mmaction master ()
* cpu format shape

* convert model

* python api

* speedup dataloader

* minor

* add cpp demo

* add visualize

* fix resize param order

* export pipeline.json

* fix three crop

* read SampleFrames from model_cfg

* minor

* lint

* move to a func

* speed up format shape cpu

* use input mat device

* fix comments

* fix comments

* update docs/benchmark

* docs/supported-codebases

* update tests/data

* fix lint

* fix lint
2022-11-04 14:15:36 +08:00
AllentDan c4d428fd7d
Add model conversion support to rv1126 ()
* WIP

* fix interpolate

* support yolov3 and retinanet

* support seg

* support ssd

* supports both partition types for retinanet and ssd

* mean std doc

* update doc, add UT

* support FSAF

* rename configs

* update dump info

* update

* python package installation doc

* update doc

* update doc

* doc
2022-11-02 11:04:22 +08:00
tripleMu 487c375533
Support tensorrt own plugin ()
* Update utils.py

Initialize the plugin that comes with TensorRT

* Format code
2022-11-01 18:19:22 +08:00
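
What "initialize the plugin that comes with TensorRT" typically looks like in the TensorRT Python API — registering the bundled nvinfer plugins before parsing an ONNX model. A generic sketch, not the repository's actual utils.py change.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
# Register the plugins shipped with TensorRT (libnvinfer_plugin) so that ops
# backed by them can be resolved when the ONNX model is parsed.
trt.init_libnvinfer_plugins(logger, "")

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
```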
lvhan028 8eff7a2eb1
update opencv that enables video build option ()
lvhan028 4c34ad74a1
bump version to 0.10.0 ()
* bump version to 0.10.0

* fix circleci workflow error
2022-10-31 19:24:47 +08:00
q.yao c5be297a67
Add nvcc flags for cc62 () 2022-10-31 15:44:41 +08:00
AllentDan 502a5696c5
update () 2022-10-31 14:12:28 +08:00
q.yao f949414034
fix bev nms () 2022-10-28 16:08:37 +08:00
AllentDan a2d6323cf1
fix text recog () 2022-10-28 16:08:25 +08:00
q.yao 09add48a2a
[Fix] Fix reppoints TensorRT support. ()
* Fix reppoints

* update todo

* typo fix
2022-10-27 16:39:27 +08:00
RunningLeon 197a7ad425
fix efficientnet from mmcls ()
* fix efficientnet from mmcls

* update
2022-10-27 16:20:39 +08:00
Li Zhang f2be2abeb5
[Feature] Add `Cond` node and pose tracker demo ()
* add Cond node

* WIP PoseTracker

* fix pose tracker

* minor fix

* simplify design

* add timing

* sync

* visualize

* remove file check
2022-10-27 14:52:04 +08:00
lansfair b6f8c1c50d
Add mmseg configs for compatibility with Fast-SCNN model in ncnn backend ()
* mmdeploy summer camp

* fix lint

* add note

* add note

* add a comment in interpolate.py

Co-authored-by: root <root@LAPTOP-A20M40H8.localdomain>
2022-10-27 10:36:26 +08:00
lvhan028 2d35c9ab35
fix test_windows_onnruntime workflow error in circleci () 2022-10-26 19:35:59 +08:00
lvhan028 f051a31e0f
make onnxruntime(gpu) available in SDK () 2022-10-26 19:23:42 +08:00
lvhan028 579fd9ab8d
update supported backend logos ()
q.yao 1323ffcb50
[Refactor] Ease rewriter import ()
* Ease rewriter import

* remove import

* add sar_encoder

* recover mha
2022-10-25 10:45:03 +08:00
lvhan028 692f535702
fix side effect caused by PR1235 () 2022-10-24 14:15:26 +08:00
Xu Lin c82a7ac89e
add MMYOLO desc in README () 2022-10-24 11:37:11 +08:00
q.yao 4a150e5e1b
update API for TensorRT8.4 () 2022-10-24 10:52:18 +08:00
AllentDan 114b0b8238
remove imports ()
* remove imports

* update doc

* detailed docstring

* rephrase
2022-10-24 10:45:52 +08:00
lvhan028 4c872a41c3
tell batch inference demos and single image inference demos apart () 2022-10-19 19:42:22 +08:00
LiuYi-Up 27ac0ab002
[Docs] add support for vipnas ()
* add support for vipnas

* add some unit tests

Signed-off-by: LiuYi-Up <1150854440@qq.com>

* Update quantization.md

* resolve some conflicts in docs

Signed-off-by: LiuYi-Up <1150854440@qq.com>

* fix the mdformat

Signed-off-by: LiuYi-Up <1150854440@qq.com>

* fix the layer_norm.py & test_mmcv_cnn.py

Signed-off-by: LiuYi-Up <1150854440@qq.com>

Signed-off-by: LiuYi-Up <1150854440@qq.com>
2022-10-19 10:24:23 +08:00
AllentDan 3eb60ea584
[Feature] Add RKNN support to SDK ()
* add rknn_net [WIP]

* add cmake

* enable mmcls

* remove toTensor in SDK pipeline

* update doc

* translate to Chinese

* update doc and add tool-chain cmake

* use ::framework

* fix lint

* doc and print log

* data map

* refine install doc

* add rknpu2 workflow

* update gcc yaml

* better cmake file

* update doc link

* use vector instead of array

* better env variable

* use soft link

* release ctx

* name rule
2022-10-18 17:52:31 +08:00
Jiahao Sun 8e634059a1
[Feat] Support Monocular 3D Detection and FCOS3D Deployment ()
* add monodet task

* format monodet

* format monodet

* sort test_monocular_detection_model.py import

* add fcos3d deploy

* change doc support model & fcos3d UT

* fix test monodet UT bug & remove ONNXBEVNMS op
2022-10-18 11:23:39 +08:00
q.yao 954e0d2ca1
update symbolic rewriter ()
* update symbolic rewriter

* update ut
2022-10-18 11:21:54 +08:00
RunningLeon b20741352e
update reg () 2022-10-18 11:16:38 +08:00
OldDreamInWind 161fb01d73
[Feature]add edsr result && super-resolution ncnn-int8 config ()
* add edsr result && ncnn-int8 config

* fix lint error

* fix lint error

* fix lint error && update benchmark.md

* add EDSRx2 pytorch result

* update edsrx4 result in benchmark
2022-10-18 10:04:56 +08:00
SsTtOoNnEe dd7550a08d
Rewrite Conv2dAdaptiveOps for conversion of EfficientNet (static shape) ()
* Rewrite Conv2dAdaptiveOps for conversion of EfficientNet (static shape)

* Refactor codes and Add unit test

* Simplify codes

* update supported model configs in yaml

* update mmcls.yml

* resolve lint error

Co-authored-by: SenseTime Research Singapore <SENSETIME\research.sgres@sg0016000001u.domain.sensetime.com>
2022-10-17 16:20:30 +08:00
lvhan028 1c478393c0
update issue form ()
* update issue form

* udpate issue form

* udpate issue form

* udpate issue form

* udpate issue form
2022-10-17 15:26:02 +08:00
tpoisonooo ace44ae9d9
improvement(scripts): cross build aarch64 ()
* udpate

* update

* CI(scripts): add auto cross build aarch64

* docs(scripts): add zh_cn doc

* docs(scripts): update

* docs(scripts): update

* fix(tools): update

* docs(zh_cn): update

* fix(scripts): remove gcc-7

* docs(scripts): update result

* udpate

* fix(tools): remove useless option

* docs(en): typo

* Update cross_build_aarch64.md

* Update cross_build_aarch64.md

* fix(tools/scripts): review advices

* fix(tools/scripts): update

* fix(cmake): remove useless option

* Update aarch64-linux-gnu.cmake
2022-10-17 11:15:29 +08:00
927 changed files with 36907 additions and 12346 deletions

@ -6,11 +6,7 @@ cd mmdeploy
MMDEPLOY_DIR=$(pwd)
mkdir -p build && cd build
cmake .. -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_TEST=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
-DMMDEPLOY_BUILD_SDK_CXX_API=ON -DMMDEPLOY_BUILD_SDK_CSHARP_API=ON \
-DMMDEPLOY_BUILD_EXAMPLES=ON -DMMDEPLOY_BUILD_SDK_CSHARP_API=ON \
-DMMDEPLOY_TARGET_DEVICES="$1" -DMMDEPLOY_TARGET_BACKENDS="$2" "${ARGS[@]:2}"
make -j$(nproc) && make install
cd install/example
mkdir -p build
cd build
cmake ../cpp -DMMDeploy_DIR="$MMDEPLOY_DIR"/build/install/lib/cmake/MMDeploy "${ARGS[@]:2}" && make -j$(nproc)


@ -1,3 +1,3 @@
Invoke-WebRequest -Uri https://download.openmmlab.com/mmdeploy/library/opencv-4.5.5.zip -OutFile opencv.zip
Invoke-WebRequest -Uri https://github.com/irexyc/mmdeploy-ci-resource/releases/download/opencv/opencv-win-amd64-4.5.5-vc16.zip -OutFile opencv.zip
Expand-Archive opencv.zip .
Move-Item opencv-4.5.5 opencv


@ -74,7 +74,7 @@ commands:
- run:
name: Install mmcv-full
command: |
python -m pip install opencv-python==4.5.4.60
python -m pip install opencv-python==4.5.4.60 opencv-contrib-python==4.5.4.60 opencv-python-headless==4.5.4.60
python -m pip install mmcv-full==<< parameters.version >> -f https://download.openmmlab.com/mmcv/dist/cpu/torch<< parameters.torch >>/index.html
install_mmcv_cuda:
parameters:
@ -91,7 +91,7 @@ commands:
- run:
name: Install mmcv-full
command: |
python -m pip install opencv-python==4.5.4.60
python -m pip install opencv-python==4.5.4.60 opencv-contrib-python==4.5.4.60 opencv-python-headless==4.5.4.60
python -m pip install mmcv-full==<< parameters.version >> -f https://download.openmmlab.com/mmcv/dist/<< parameters.cuda >>/torch<< parameters.torch >>/index.html
install_mmdeploy:
description: "Install MMDeploy"
@ -216,19 +216,12 @@ jobs:
-DMMDEPLOY_BUILD_SDK=ON `
-DMMDEPLOY_BUILD_TEST=ON `
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON `
-DMMDEPLOY_BUILD_SDK_CXX_API=ON `
-DMMDEPLOY_BUILD_EXAMPLES=ON `
-DMMDEPLOY_BUILD_SDK_CSHARP_API=ON `
-DMMDEPLOY_TARGET_BACKENDS="ort" `
-DOpenCV_DIR="$env:OPENCV_PACKAGE_DIR"
cmake --build . --config Release -- /m
cmake --install . --config Release
cd install/example
mkdir build -ErrorAction SilentlyContinue
cd build
cmake ../cpp -G "Visual Studio 16 2019" -A x64 -T v142 `
-DMMDeploy_DIR="$env:MMDEPLOY_DIR/build/install/lib/cmake/MMDeploy" `
-DOpenCV_DIR="$env:OPENCV_PACKAGE_DIR"
cmake --build . --config Release -- /m
- install_mmdeploy
- install_model_converter_req
- perform_model_converter_ut
@ -280,7 +273,7 @@ jobs:
- run:
name: Inference model by SDK
command: |
mmdeploy/build/install/example/build/image_classification cpu mmdeploy-models/mmcls/onnxruntime mmclassification/demo/demo.JPEG
./mmdeploy/build/bin/image_classification cpu mmdeploy-models/mmcls/onnxruntime mmclassification/demo/demo.JPEG
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows


@ -1,2 +1,3 @@
cann
CANN
nd


@ -1,6 +1,7 @@
name: Bug report
description: Create a report to help us improve
name: 🐞 Bug report
description: Create a report to help us reproduce and fix the bug
title: "[Bug] "
labels: ['Bug']
body:
- type: checkboxes
@ -52,5 +53,3 @@ body:
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
Thanks for your bug report. We appreciate it a lot.
labels: ['Bug']


@ -1,11 +1,15 @@
name: Feature request
name: 🚀 Feature request
description: Suggest an idea for this project
title: "[Feature] "
body:
- type: markdown
attributes:
value: >
## Describe the feature
value: |
We strongly appreciate you creating a PR to implement this feature [here](https://github.com/open-mmlab/mmdeploy/pulls)!
If you need our help, please fill in as much of the following form as you're able to.
**The less clear the description, the longer it will take to solve it.**
- type: textarea
attributes:
label: Motivation


@ -0,0 +1,23 @@
name: 📚 Documentation
description: Report an issue related to the documentation.
labels: "kind/doc,status/unconfirmed"
title: "[Docs] "
body:
- type: textarea
attributes:
label: 📚 The doc issue
description: >
A clear and concise description the issue.
validations:
required: true
- type: textarea
attributes:
label: Suggest a potential alternative/fix
description: >
Tell us how we could improve the documentation in this regard.
- type: markdown
attributes:
value: >
Thanks for contributing 🎉!


@ -1,6 +1,12 @@
blank_issues_enabled: false
contact_links:
- name: Common Issues
- name: 💥 FAQ
url: https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/faq.md
about: Check if your issue already has solutions
- name: 💬 Forum
url: https://github.com/open-mmlab/mmdeploy/discussions
about: Ask general usage questions and discuss with other MMDeploy community members
- name: 🌐 Explore OpenMMLab
url: https://openmmlab.com/
about: Get know more about OpenMMLab


@ -1,7 +0,0 @@
---
name: General questions
about: Ask general questions to get help
title: ''
labels: ''
assignees: ''
---

.github/release.yml

@ -0,0 +1,32 @@
changelog:
categories:
- title: 🚀 Features
labels:
- feature
- enhancement
- title: 💥 Improvements
labels:
- improvement
- title: 🐞 Bug fixes
labels:
- bug
- Bug:P0
- Bug:P1
- Bug:P2
- Bug:P3
- title: 📚 Documentations
labels:
- documentation
- title: 🌐 Other
labels:
- '*'
exclude:
labels:
- feature
- enhancement
- bug
- documentation
- Bug:P0
- Bug:P1
- Bug:P2
- Bug:P3


@ -47,6 +47,13 @@ PARAMS = [
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/litehrnet.tar' # noqa: E501
]
},
{
'task':
'RotatedDetection',
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/gliding-vertex.tar' # noqa: E501
]
}
]


@ -16,15 +16,18 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build_sdk_demo:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
strategy:
matrix:
python-version: [3.7]
steps:
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Checkout repository
@ -37,9 +40,7 @@ jobs:
run: |
sudo apt update
sudo apt install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev libc++1-9 libc++abi1-9
sudo add-apt-repository ppa:ignaciovizzo/opencv3-nonfree
sudo apt install libopencv-dev
pkg-config --libs opencv
- name: Install Ascend Toolkit
run: |
mkdir -p $GITHUB_WORKSPACE/Ascend
@ -50,5 +51,5 @@ jobs:
mkdir -p build && pushd build
source $GITHUB_WORKSPACE/Ascend/ascend-toolkit/set_env.sh
export LD_LIBRARY_PATH=$GITHUB_WORKSPACE/Ascend/ascend-toolkit/latest/runtime/lib64/stub:$LD_LIBRARY_PATH
cmake .. -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_SHARED_LIBS=ON -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=OFF -DMMDEPLOY_TARGET_DEVICES=cpu -DMMDEPLOY_BUILD_EXAMPLES=ON -DMMDEPLOY_TARGET_BACKENDS=acl -DMMDEPLOY_CODEBASES=all
cmake .. -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_EXAMPLES=ON -DMMDEPLOY_TARGET_BACKENDS=acl
make install -j4


@ -39,22 +39,21 @@ jobs:
wget https://github.com/irexyc/mmdeploy-ci-resource/releases/download/libtorch/libtorch-osx-arm64-1.8.0.tar.gz
mkdir $GITHUB_WORKSPACE/libtorch-install
tar xf libtorch-osx-arm64-1.8.0.tar.gz -C $GITHUB_WORKSPACE/libtorch-install
- name: build
- name: build-static-lib
run: |
mkdir build && cd build
cmake .. -DCMAKE_OSX_ARCHITECTURES="arm64" \
-DCMAKE_SYSTEM_PROCESSOR="arm64" \
-DMMDEPLOY_BUILD_SDK=ON \
-DMMDEPLOY_TARGET_DEVICES="cpu" \
-DMMDEPLOY_CODEBASES=all \
-DOpenCV_DIR=$GITHUB_WORKSPACE/opencv-install/lib/cmake/opencv4 \
-DTorch_DIR=$GITHUB_WORKSPACE/libtorch-install/share/cmake/Torch \
-DMMDEPLOY_TARGET_BACKENDS="coreml" \
-DMMDEPLOY_BUILD_EXAMPLES=ON \
-DMMDEPLOY_BUILD_SDK_MONOLITHIC=OFF \
-DMMDEPLOY_SHARED_LIBS=OFF
cmake --build . -j 3
cmake --build . --target install
- name: build-shared
- name: build-monolithic-lib
run: |
mkdir build-shared && cd build-shared
cmake .. -DCMAKE_OSX_ARCHITECTURES="arm64" \
@ -65,7 +64,8 @@ jobs:
-DOpenCV_DIR=$GITHUB_WORKSPACE/opencv-install/lib/cmake/opencv4 \
-DTorch_DIR=$GITHUB_WORKSPACE/libtorch-install/share/cmake/Torch \
-DMMDEPLOY_TARGET_BACKENDS="coreml" \
-DMMDEPLOY_BUILD_EXAMPLES=ON \
-DMMDEPLOY_SHARED_LIBS=ON
-DMMDEPLOY_BUILD_SDK_MONOLITHIC=ON \
-DMMDEPLOY_SHARED_LIBS=OFF \
-DMMDEPLOY_BUILD_EXAMPLES=ON
cmake --build . -j 3
cmake --build . --target install


@ -2,11 +2,19 @@ name: backend-ncnn
on:
push:
branches:
- main
- master
- dev-1.x
paths-ignore:
- "demo/**"
- "tools/**"
pull_request:
branches:
- main
- master
- dev-1.x
paths-ignore:
- "demo/**"
- "tools/**"
@ -16,9 +24,12 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
test_onnx2ncnn:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
strategy:
matrix:
python-version: [3.7]
@ -28,7 +39,7 @@ jobs:
with:
submodules: 'recursive'
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install unittest dependencies
@ -71,12 +82,12 @@ jobs:
with:
submodules: 'recursive'
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install mmdeploy
run: |
python3 tools/scripts/build_ubuntu_x64_ncnn.py
python3 -m pip install torch==1.8.2 torchvision==0.9.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cpu
python3 -m pip install mmcv-full==1.5.1 -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.8.0/index.html
python3 -c 'import mmdeploy.apis.ncnn as ncnn_api; assert ncnn_api.is_available() and ncnn_api.is_custom_ops_available()'
python3 tools/scripts/build_ubuntu_x64_ncnn.py 8
python3 -c 'import mmdeploy.apis.ncnn as ncnn_api; assert ncnn_api.is_available(with_custom_ops=True)'


@ -16,6 +16,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
script_install:
runs-on: ubuntu-20.04
@ -28,15 +31,15 @@ jobs:
with:
submodules: 'recursive'
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install mmdeploy
run: |
python3 tools/scripts/build_ubuntu_x64_ort.py
python3 -m pip install torch==1.8.2 torchvision==0.9.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cpu
python3 -m pip install mmcv-full==1.5.1 -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.8.0/index.html
python3 -c 'import mmdeploy.apis.onnxruntime as ort_api; assert ort_api.is_available() and ort_api.is_custom_ops_available()'
python3 tools/scripts/build_ubuntu_x64_ort.py 8
python3 -c 'import mmdeploy.apis.onnxruntime as ort_api; assert ort_api.is_available(with_custom_ops=True)'
- name: test mmcls full pipeline
run: |
pip install openmim


@ -16,24 +16,20 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
script_install:
runs-on: ubuntu-18.04
strategy:
matrix:
python-version: [3.7]
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install mmdeploy
run: |
python3 tools/scripts/build_ubuntu_x64_pplnn.py
python3 -m pip install torch==1.8.2 torchvision==0.9.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cpu
python3 -m pip install mmcv-full==1.5.1 -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.8.0/index.html
python3 tools/scripts/build_ubuntu_x64_pplnn.py 8
python3 -c 'import mmdeploy.apis.pplnn as pplnn_api; assert pplnn_api.is_available()'


@ -0,0 +1,47 @@
name: backend-rknn
on:
push:
paths:
- "csrc/**"
- "demo/csrc/**"
- "CMakeLists.txt"
pull_request:
paths-ignore:
- "csrc/**"
- "demo/csrc/**"
- "CMakeLists.txt"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build_rknpu2:
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: update
run: sudo apt update
- name: cross compile
run: |
sh -x tools/scripts/ubuntu_cross_build_rknn.sh rk3588
build_rknpu:
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: update
run: sudo apt update
- name: cross compile
run: |
sh -x tools/scripts/ubuntu_cross_build_rknn.sh rv1126


@ -16,9 +16,12 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build_sdk_demo:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
@ -31,9 +34,7 @@ jobs:
sudo apt install wget libprotobuf-dev protobuf-compiler
sudo apt update
sudo apt install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev libc++1-9 libc++abi1-9
sudo add-apt-repository ppa:ignaciovizzo/opencv3-nonfree
sudo apt install libopencv-dev
pkg-config --libs opencv
- name: Install snpe
run: |
wget https://media.githubusercontent.com/media/tpoisonooo/mmdeploy_snpe_testdata/main/snpe-1.59.tar.gz
@ -47,7 +48,7 @@ jobs:
export SNPE_ROOT=/home/runner/work/mmdeploy/mmdeploy/snpe-1.59.0.3230
export LD_LIBRARY_PATH=${SNPE_ROOT}/lib/x86_64-linux-clang:${LD_LIBRARY_PATH}
export MMDEPLOY_SNPE_X86_CI=1
cmake .. -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_SHARED_LIBS=ON -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=OFF -DMMDEPLOY_TARGET_DEVICES=cpu -DMMDEPLOY_TARGET_BACKENDS=snpe -DMMDEPLOY_CODEBASES=all
cmake .. -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_TARGET_BACKENDS=snpe
make -j2
make install
pushd install/example


@ -16,9 +16,12 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
script_install:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
strategy:
matrix:
python-version: [3.7]
@ -28,9 +31,9 @@ jobs:
with:
submodules: 'recursive'
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install mmdeploy
run: |
python3 tools/scripts/build_ubuntu_x64_torchscript.py
python3 tools/scripts/build_ubuntu_x64_torchscript.py 8


@ -0,0 +1,44 @@
name: backend-tvm
on:
push:
paths-ignore:
- "demo/**"
- "tools/**"
pull_request:
paths-ignore:
- "demo/**"
- "tools/**"
- "docs/**"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
script_install:
runs-on: ubuntu-20.04
strategy:
matrix:
python-version: [3.7]
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install mmdeploy
run: |
python3 -m pip install torch==1.8.2 torchvision==0.9.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cpu
python3 -m pip install mmcv-full==1.5.1 -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.8.0/index.html
python3 -m pip install decorator psutil scipy attrs tornado pytest
python3 tools/scripts/build_ubuntu_x64_tvm.py 8
source ~/mmdeploy.env
python3 -c 'import mmdeploy.apis.tvm as tvm_api; assert tvm_api.is_available()'


@ -20,12 +20,14 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build_cpu_model_convert:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
strategy:
matrix:
python-version: [3.7]
torch: [1.8.0, 1.9.0]
mmcv: [1.4.2]
include:
@ -36,23 +38,23 @@ jobs:
torch_version: torch1.9
torchvision: 0.10.0
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- uses: actions/checkout@v3
- name: Install PyTorch
run: pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
run: |
python -m pip install --upgrade pip
python -V
python -m pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
- name: Install MMCV
run: |
pip install mmcv-full==${{matrix.mmcv}} -f https://download.openmmlab.com/mmcv/dist/cpu/${{matrix.torch_version}}/index.html
python -m pip install mmcv-full==${{matrix.mmcv}} -f https://download.openmmlab.com/mmcv/dist/cpu/${{matrix.torch_version}}/index.html
python -c 'import mmcv; print(mmcv.__version__)'
- name: Install unittest dependencies
run: |
pip install -r requirements.txt
pip install -U numpy
python -m pip install -U numpy
python -m pip install rapidfuzz==2.15.1
python -m pip install -r requirements.txt
- name: Build and install
run: rm -rf .eggs && pip install -e .
run: rm -rf .eggs && python -m pip install -e .
- name: Run python unittests and generate coverage report
run: |
coverage run --branch --source mmdeploy -m pytest -rsE tests
@ -60,7 +62,7 @@ jobs:
coverage report -m
build_cpu_sdk:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
@ -70,16 +72,18 @@ jobs:
run: sudo apt update
- name: gcc-multilib
run: |
sudo apt install gcc-multilib g++-multilib wget libprotobuf-dev protobuf-compiler
sudo apt update
sudo apt install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev libc++1-9 libc++abi1-9
sudo add-apt-repository ppa:ignaciovizzo/opencv3-nonfree
sudo apt install libopencv-dev lcov wget
pkg-config --libs opencv
sudo apt install libopencv-dev lcov wget -y
- name: Build and run SDK unit test without backend
run: |
mkdir -p build && pushd build
cmake .. -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_CODEBASES=all -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=OFF -DMMDEPLOY_TARGET_DEVICES=cpu -DMMDEPLOY_COVERAGE=ON -DMMDEPLOY_BUILD_TEST=ON
cmake .. \
-DMMDEPLOY_CODEBASES=all \
-DMMDEPLOY_BUILD_SDK=ON \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=OFF \
-DMMDEPLOY_TARGET_DEVICES=cpu \
-DMMDEPLOY_COVERAGE=ON \
-DMMDEPLOY_BUILD_TEST=ON
make -j2
mkdir -p mmdeploy_test_resources/transform
cp ../tests/data/tiger.jpeg mmdeploy_test_resources/transform/
@ -88,15 +92,32 @@ jobs:
ls -lah coverage.info
cp coverage.info ../
cross_build_aarch64:
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: update
run: sudo apt update
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8
- name: gcc-multilib
run: |
sh -x tools/scripts/ubuntu_cross_build_aarch64.sh
build_cuda102:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
container:
image: pytorch/pytorch:1.9.0-cuda10.2-cudnn7-devel
env:
FORCE_CUDA: 1
strategy:
matrix:
python-version: [3.7]
python-version: [3.8]
torch: [1.9.0+cu102]
mmcv: [1.4.2]
include:
@ -104,26 +125,20 @@ jobs:
torch_version: torch1.9
torchvision: 0.10.0+cu102
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- uses: actions/checkout@v3
- name: Install system dependencies
run: |
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A4B469963BF863CC
apt-get update && apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev python${{matrix.python-version}}-dev
apt-get clean
rm -rf /var/lib/apt/lists/*
apt-get update && apt-get install -y git
- name: Install PyTorch
run: python -m pip install torch==${{matrix.torch}} torchvision==${{matrix.torchvision}} -f https://download.pytorch.org/whl/torch_stable.html
- name: Install dependencies
run: |
python -V
python -m pip install -U pip
python -m pip install mmcv-full==${{matrix.mmcv}} -f https://download.openmmlab.com/mmcv/dist/cu102/${{matrix.torch_version}}/index.html
CFLAGS=`python -c 'import sysconfig;print("-I"+sysconfig.get_paths()["include"])'` python -m pip install -r requirements.txt
pip install -U pycuda
python -m pip install -U numpy
python -m pip install -r requirements.txt
python -m pip install rapidfuzz==2.15.1
- name: Build and install
run: |
rm -rf .eggs && python -m pip install -e .
@ -135,41 +150,31 @@ jobs:
coverage report -m
build_cuda111:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
container:
image: pytorch/pytorch:1.8.0-cuda11.1-cudnn8-devel
strategy:
matrix:
python-version: [3.7]
python-version: [3.8]
torch: [1.8.0+cu111]
mmcv: [1.4.2]
include:
- torch: 1.8.0+cu111
torch_version: torch1.8
torchvision: 0.9.0+cu111
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- uses: actions/checkout@v3
- name: Install system dependencies
run: |
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A4B469963BF863CC
apt-get update && apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev python${{matrix.python-version}}-dev
apt-get clean
rm -rf /var/lib/apt/lists/*
- name: Install PyTorch
run: python -m pip install torch==${{matrix.torch}} torchvision==${{matrix.torchvision}} -f https://download.pytorch.org/whl/torch_stable.html
apt-get update && apt-get install -y git
- name: Install dependencies
run: |
python -V
python -m pip install -U pip
python -m pip install mmcv-full==${{matrix.mmcv}} -f https://download.openmmlab.com/mmcv/dist/cu111/${{matrix.torch_version}}/index.html
CFLAGS=`python -c 'import sysconfig;print("-I"+sysconfig.get_paths()["include"])'` python -m pip install -r requirements.txt
pip install -U pycuda
python -m pip install -U numpy
python -m pip install -r requirements.txt
python -m pip install rapidfuzz==2.15.1
- name: Build and install
run: |
rm -rf .eggs && python -m pip install -e .


@ -14,16 +14,19 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
test_java_api:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: Set up Python 3.7
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: 3.7
- name: Install unittest dependencies


@ -2,13 +2,16 @@ name: lint
on: [push, pull_request]
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Python 3.7
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: 3.7
- name: Install pre-commit hook


@ -17,6 +17,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build_riscv64_gcc:
runs-on: ubuntu-20.04
@ -45,12 +48,9 @@ jobs:
cmake .. \
-DCMAKE_TOOLCHAIN_FILE=../cmake/toolchains/riscv64-linux-gnu.cmake \
-DMMDEPLOY_BUILD_SDK=ON \
-DMMDEPLOY_SHARED_LIBS=ON \
-DMMDEPLOY_BUILD_EXAMPLES=ON \
-DMMDEPLOY_TARGET_DEVICES="cpu" \
-DMMDEPLOY_TARGET_BACKENDS="ncnn" \
-Dncnn_DIR=$GITHUB_WORKSPACE/ncnn-install/lib/cmake/ncnn/ \
-DMMDEPLOY_CODEBASES=all \
-DOpenCV_DIR=$GITHUB_WORKSPACE/opencv-install/lib/cmake/opencv4
make -j$(nproc)
make install

.github/workflows/prebuild.yml

@ -0,0 +1,277 @@
name: prebuild
on:
push:
branches:
- main
- dev-1.x
- master
paths:
- "mmdeploy/version.py"
permissions: write-all
jobs:
linux_build:
runs-on: [self-hosted, linux-3090]
container:
image: openmmlab/mmdeploy:manylinux2014_x86_64-cuda11.3
options: "--gpus=all --ipc=host"
volumes:
- /data2/actions-runner/prebuild:/__w/mmdeploy/prebuild
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: recursive
- name: Get mmdeploy version
run: |
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Build MMDeploy
run: |
source activate mmdeploy-3.6
pip install pyyaml packaging setuptools wheel
mkdir pack; cd pack
python ../tools/package_tools/generate_build_config.py --backend 'trt;ort' \
--system linux --output config.yml --build-mmdeploy
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Build sdk cpu backend
run: |
source activate mmdeploy-3.6
cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort' \
--system linux --output config.yml --device cpu --build-sdk --build-sdk-monolithic \
--build-sdk-python --sdk-dynamic-net
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Build sdk cuda backend
run: |
source activate mmdeploy-3.6
cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort;trt' \
--system linux --output config.yml --device cuda --build-sdk --build-sdk-monolithic \
--build-sdk-python --sdk-dynamic-net --onnxruntime-dir=$ONNXRUNTIME_GPU_DIR
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Move artifact
run: |
mkdir -p /__w/mmdeploy/prebuild/$OUTPUT_DIR
cp -r pack/* /__w/mmdeploy/prebuild/$OUTPUT_DIR
linux_build_cxx11abi:
runs-on: [self-hosted, linux-3090]
container:
image: openmmlab/mmdeploy:build-ubuntu16.04-cuda11.3
options: "--gpus=all --ipc=host"
volumes:
- /data2/actions-runner/prebuild:/__w/mmdeploy/prebuild
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: recursive
- name: Get mmdeploy version
run: |
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Build sdk cpu backend
run: |
mkdir pack; cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort' \
--system linux --output config.yml --device cpu --build-sdk --build-sdk-monolithic \
--sdk-dynamic-net --cxx11abi
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Build sdk cuda backend
run: |
cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort;trt' \
--system linux --output config.yml --device cuda --build-sdk --build-sdk-monolithic \
--sdk-dynamic-net --cxx11abi --onnxruntime-dir=$ONNXRUNTIME_GPU_DIR --cudnn-dir /usr
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Move artifact
run: |
mkdir -p /__w/mmdeploy/prebuild/$OUTPUT_DIR
cp -r pack/* /__w/mmdeploy/prebuild/$OUTPUT_DIR
linux_test:
runs-on: [self-hosted, linux-3090]
needs:
- linux_build
- linux_build_cxx11abi
container:
image: openmmlab/mmdeploy:ubuntu20.04-cuda11.3
options: "--gpus=all --ipc=host"
volumes:
- /data2/actions-runner/prebuild:/__w/mmdeploy/prebuild
- /data2/actions-runner/testmodel:/__w/mmdeploy/testmodel
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Get mmdeploy version
run: |
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Test python
run: |
cd /__w/mmdeploy/prebuild/$OUTPUT_DIR
bash $GITHUB_WORKSPACE/tools/package_tools/test/test_sdk_python.sh
- name: Test c/cpp
run: |
cd /__w/mmdeploy/prebuild/$OUTPUT_DIR
bash $GITHUB_WORKSPACE/tools/package_tools/test/test_sdk.sh
linux_upload:
runs-on: [self-hosted, linux-3090]
if: startsWith(github.ref, 'refs/tags/')
environment: 'prod'
needs: linux_test
env:
PREBUILD_DIR: /data2/actions-runner/prebuild
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Get mmdeploy version
run: |
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Upload mmdeploy
run: |
cd $PREBUILD_DIR/$OUTPUT_DIR/mmdeploy
pip install twine
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Upload mmdeploy_runtime
run: |
cd $PREBUILD_DIR/$OUTPUT_DIR/mmdeploy_runtime
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Zip mmdeploy sdk
run: |
cd $PREBUILD_DIR/$OUTPUT_DIR/sdk
for folder in *
do
tar czf $folder.tar.gz $folder
done
- name: Upload mmdeploy sdk
uses: softprops/action-gh-release@v1
with:
files: |
$PREBUILD_DIR/$OUTPUT_DIR/sdk/*.tar.gz
windows_build:
runs-on: [self-hosted, win10-3080]
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: recursive
- name: Get mmdeploy version
run: |
conda activate mmdeploy-3.8
$env:MMDEPLOY_VERSION=(python -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $env:MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$env:MMDEPLOY_VERSION" >> $env:GITHUB_ENV
echo "OUTPUT_DIR=$env:MMDEPLOY_VERSION-$env:GITHUB_RUN_ID" >> $env:GITHUB_ENV
- name: Build MMDeploy
run: |
. D:\DEPS\cienv\prebuild_gpu_env.ps1
conda activate mmdeploy-3.6
mkdir pack; cd pack
python ../tools/package_tools/generate_build_config.py --backend 'trt;ort' `
--system windows --output config.yml --build-mmdeploy
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Build sdk cpu backend
run: |
. D:\DEPS\cienv\prebuild_cpu_env.ps1
conda activate mmdeploy-3.6
cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort' `
--system windows --output config.yml --device cpu --build-sdk --build-sdk-monolithic `
--build-sdk-python --sdk-dynamic-net
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Build sdk cuda backend
run: |
. D:\DEPS\cienv\prebuild_gpu_env.ps1
conda activate mmdeploy-3.6
cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort;trt' `
--system windows --output config.yml --device cuda --build-sdk --build-sdk-monolithic `
--build-sdk-python --sdk-dynamic-net
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Move artifact
run: |
New-Item "D:/DEPS/ciartifact/$env:OUTPUT_DIR" -ItemType Directory -Force
Move-Item pack/* "D:/DEPS/ciartifact/$env:OUTPUT_DIR"
windows_test:
runs-on: [self-hosted, win10-3080]
needs: windows_build
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Get mmdeploy version
run: |
conda activate mmdeploy-3.8
$env:MMDEPLOY_VERSION=(python -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $env:MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$env:MMDEPLOY_VERSION" >> $env:GITHUB_ENV
echo "OUTPUT_DIR=$env:MMDEPLOY_VERSION-$env:GITHUB_RUN_ID" >> $env:GITHUB_ENV
- name: Test python
run: |
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR"
. D:\DEPS\cienv\prebuild_cpu_env.ps1
conda activate ci-test
& "$env:GITHUB_WORKSPACE/tools/package_tools/test/test_sdk_python.ps1"
- name: Test c/cpp
run: |
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR"
. D:\DEPS\cienv\prebuild_cpu_env.ps1
& "$env:GITHUB_WORKSPACE/tools/package_tools/test/test_sdk.ps1"
windows_upload:
runs-on: [self-hosted, win10-3080]
if: startsWith(github.ref, 'refs/tags/')
environment: 'prod'
needs: windows_test
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Get mmdeploy version
run: |
conda activate mmdeploy-3.8
$env:MMDEPLOY_VERSION=(python -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $env:MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$env:MMDEPLOY_VERSION" >> $env:GITHUB_ENV
echo "OUTPUT_DIR=$env:MMDEPLOY_VERSION-$env:GITHUB_RUN_ID" >> $env:GITHUB_ENV
- name: Upload mmdeploy
run: |
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR/mmdeploy"
conda activate mmdeploy-3.8
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Upload mmdeploy_runtime
run: |
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR/mmdeploy_runtime"
conda activate mmdeploy-3.8
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Zip mmdeploy sdk
run: |
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR/sdk"
$folders = $(ls).Name
foreach ($folder in $folders) {
Compress-Archive -Path $folder -DestinationPath "$folder.zip"
}
- name: Upload mmdeploy sdk
uses: softprops/action-gh-release@v1
with:
files: |
D:/DEPS/ciartifact/$env:OUTPUT_DIR/sdk/*.zip


@ -16,9 +16,12 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
test_ncnn_PTQ:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
container:
image: pytorch/pytorch:1.8.0-cuda11.1-cudnn8-devel
@ -33,45 +36,31 @@ jobs:
torchvision: 0.9.0+cu111
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- uses: actions/checkout@v3
- name: Install system dependencies
run: |
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A4B469963BF863CC
apt-get update && apt-get install -y wget ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev python${{matrix.python-version}}-dev
apt-get clean
rm -rf /var/lib/apt/lists/*
- name: Install PyTorch
run: python -m pip install torch==${{matrix.torch}} torchvision==${{matrix.torchvision}} -f https://download.pytorch.org/whl/torch_stable.html
apt-get update && apt-get install -y git wget
- name: Install dependencies
run: |
python -V
python -m pip install -U pip
python -m pip install mmcv-full==${{matrix.mmcv}} -f https://download.openmmlab.com/mmcv/dist/cu111/${{matrix.torch_version}}/index.html
CFLAGS=`python -c 'import sysconfig;print("-I"+sysconfig.get_paths()["include"])'` python -m pip install -r requirements.txt
python -m pip install -U numpy
python -m pip install -r requirements.txt
python -m pip install rapidfuzz==2.15.1
- name: Install mmcls
run: |
cd ~
git clone https://github.com/open-mmlab/mmclassification.git
git clone -b v0.23.0 --depth 1 https://github.com/open-mmlab/mmclassification.git
cd mmclassification
git checkout v0.23.0
python3 -m pip install -e .
cd -
- name: Install ppq
run: |
cd ~
python -m pip install protobuf==3.20.0
git clone https://github.com/openppl-public/ppq
git clone -b v0.6.6 --depth 1 https://github.com/openppl-public/ppq
cd ppq
git checkout edbecf44c7b203515640e4f4119c000a1b66b33a
python3 -m pip install -r requirements.txt
python3 setup.py install
cd -
- name: Run tests
run: |
echo $(pwd)
export PYTHONPATH=${PWD}/ppq:${PYTHONPATH}
python3 .github/scripts/quantize_to_ncnn.py


@ -14,9 +14,12 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
test_rust_api:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
@ -24,7 +27,7 @@ jobs:
submodules: 'recursive'
- name: Set up Python 3.7
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: 3.7
- name: Install latest nightly Rust

.github/workflows/stale.yml

@ -0,0 +1,32 @@
name: 'Close stale issues and PRs'
on:
schedule:
# check issues and pull requests once a day at 01:30 a.m.
- cron: '30 1 * * *'
permissions:
contents: read
jobs:
stale:
permissions:
issues: write
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v7
with:
stale-issue-message: 'This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.'
stale-pr-message: 'This PR is marked as stale because there has been no activity in the past 45 days. It will be closed in 10 days if the stale label is not removed or if there are no further updates.'
close-issue-message: 'This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.'
close-pr-message: 'This PR is closed because it has been stale for 10 days. Please reopen this PR if you have any updates and want to keep contributing the code.'
# only issues/PRs with the following labels are checked
any-of-labels: 'invalid, awaiting response, duplicate'
days-before-issue-stale: 7
days-before-pr-stale: 45
days-before-issue-close: 5
days-before-pr-close: 10
# automatically remove the stale label when the issues or the pull requests are updated or commented
remove-stale-when-updated: true
operations-per-run: 50

.gitignore

@ -164,3 +164,6 @@ service/snpe/grpc_cpp_plugin
csrc/mmdeploy/preprocess/elena/json
csrc/mmdeploy/preprocess/elena/cpu_kernel/*
csrc/mmdeploy/preprocess/elena/cuda_kernel/*
# c#
demo/csharp/*/Properties

.gitmodules

@ -1,9 +1,9 @@
[submodule "third_party/cub"]
path = third_party/cub
url = https://github.com/NVIDIA/cub.git
path = third_party/cub
url = https://github.com/NVIDIA/cub.git
[submodule "third_party/pybind11"]
path = third_party/pybind11
url = https://github.com/pybind/pybind11.git
path = third_party/pybind11
url = https://github.com/pybind/pybind11.git
[submodule "third_party/spdlog"]
path = third_party/spdlog
url = https://github.com/gabime/spdlog.git
path = third_party/spdlog
url = https://github.com/gabime/spdlog.git


@ -3,9 +3,11 @@ repos:
rev: 4.0.1
hooks:
- id: flake8
args: ["--exclude=*/client/inference_pb2.py,*/client/inference_pb2_grpc.py"]
args: ["--exclude=*/client/inference_pb2.py, \
*/client/inference_pb2_grpc.py, \
tools/package_tools/packaging/setup.py"]
- repo: https://github.com/PyCQA/isort
rev: 5.10.1
rev: 5.11.5
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf


@ -5,7 +5,7 @@ endif ()
message(STATUS "CMAKE_INSTALL_PREFIX: ${CMAKE_INSTALL_PREFIX}")
cmake_minimum_required(VERSION 3.14)
project(MMDeploy VERSION 0.9.0)
project(MMDeploy VERSION 0.14.0)
set(CMAKE_CXX_STANDARD 17)
@ -22,12 +22,12 @@ endif ()
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
# options
option(MMDEPLOY_SHARED_LIBS "build shared libs" ON)
option(MMDEPLOY_SHARED_LIBS "build shared libs" OFF)
option(MMDEPLOY_BUILD_SDK "build MMDeploy SDK" OFF)
option(MMDEPLOY_BUILD_SDK_MONOLITHIC "build single lib for SDK API" OFF)
option(MMDEPLOY_DYNAMIC_BACKEND "dynamic load backend" OFF)
option(MMDEPLOY_BUILD_SDK_MONOLITHIC "build single lib for SDK API" ON)
option(MMDEPLOY_BUILD_TEST "build unittests" OFF)
option(MMDEPLOY_BUILD_SDK_PYTHON_API "build SDK Python API" OFF)
option(MMDEPLOY_BUILD_SDK_CXX_API "build SDK C++ API" OFF)
option(MMDEPLOY_BUILD_SDK_CSHARP_API "build SDK C# API support" OFF)
option(MMDEPLOY_BUILD_SDK_JAVA_API "build SDK JAVA API" OFF)
option(MMDEPLOY_BUILD_EXAMPLES "build examples" OFF)
@ -40,6 +40,10 @@ set(MMDEPLOY_TARGET_DEVICES "cpu" CACHE STRING "target devices to support")
set(MMDEPLOY_TARGET_BACKENDS "" CACHE STRING "target inference engines to support")
set(MMDEPLOY_CODEBASES "all" CACHE STRING "select OpenMMLab codebases")
if ((NOT MMDEPLOY_BUILD_SDK_MONOLITHIC) AND MMDEPLOY_DYNAMIC_BACKEND)
set(MMDEPLOY_DYNAMIC_BACKEND OFF)
endif ()
if (NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release CACHE STRING "choose 'Release' as default build type" FORCE)
endif ()
@ -73,8 +77,6 @@ endif ()
if (MSVC)
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/diagnostics:classic>)
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/Zc:preprocessor>) # /experimental:preprocessor on VS2017
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/Zc:__cplusplus>)
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/wd4251>)
endif ()
@ -97,10 +99,12 @@ include(cmake/MMDeploy.cmake)
add_subdirectory(csrc/mmdeploy)
if (MMDEPLOY_BUILD_SDK)
install(TARGETS MMDeployStaticModules
MMDeployDynamicModules
MMDeployLibs
EXPORT MMDeployTargets)
if (NOT MMDEPLOY_BUILD_SDK_MONOLITHIC)
install(TARGETS MMDeployStaticModules
MMDeployDynamicModules
MMDeployLibs
EXPORT MMDeployTargets)
endif ()
if (MMDEPLOY_BUILD_TEST)
add_subdirectory(tests/test_csrc)
@ -128,6 +132,7 @@ if (MMDEPLOY_BUILD_SDK)
mmdeploy_add_deps(pplnn BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS pplnn)
endif ()
mmdeploy_add_deps(snpe BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS snpe)
mmdeploy_add_deps(rknn BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS rknn)
include(CMakePackageConfigHelpers)
# generate the config file that includes the exports


@ -2,7 +2,9 @@ include requirements/*.txt
include mmdeploy/backend/ncnn/*.so
include mmdeploy/backend/ncnn/*.dll
include mmdeploy/backend/ncnn/*.pyd
include mmdeploy/backend/ncnn/mmdeploy_onnx2ncnn*
include mmdeploy/lib/*.so
include mmdeploy/lib/*.so*
include mmdeploy/lib/*.dll
include mmdeploy/lib/*.pyd
include mmdeploy/backend/torchscript/*.so


@ -28,6 +28,16 @@
English | [简体中文](README_zh-CN.md)
## Highlights
MMDeploy 1.x has been released and is adapted to the upstream OpenMMLab 2.0 codebases. Please **align the versions** when using it.
The default branch has been switched from `master` to `main`. MMDeploy 0.x (`master`) will be deprecated, and new features will only be added to MMDeploy 1.x (`main`) in the future.
| mmdeploy | mmengine | mmcv | mmdet | others |
| :------: | :------: | :------: | :------: | :----: |
| 0.x.y | - | \<=1.x.y | \<=2.x.y | 0.x.y |
| 1.x.y | 0.x.y | 2.x.y | 3.x.y | 1.x.y |
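For example, the alignment can be checked programmatically. The snippet below is a minimal sketch for the 0.x row of the table; the exact bounds are illustrative, not an official constraints file.

```python
from packaging.version import Version

import mmcv
import mmdet

# MMDeploy 0.x expects mmcv-full 1.x and mmdet 2.x (see the table above).
assert Version(mmcv.__version__) < Version('2.0.0')
assert Version(mmdet.__version__) < Version('3.0.0')
```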
## Introduction
MMDeploy is an open-source deep learning model deployment toolset. It is a part of the [OpenMMLab](https://openmmlab.com/) project.
@ -50,6 +60,7 @@ The currently supported codebases and models are as follows, and more will be in
- [mmpose](docs/en/04-supported-codebases/mmpose.md)
- [mmdet3d](docs/en/04-supported-codebases/mmdet3d.md)
- [mmrotate](docs/en/04-supported-codebases/mmrotate.md)
- [mmaction2](docs/en/04-supported-codebases/mmaction2.md)
### Multiple inference backends are available
@ -57,18 +68,22 @@ The supported Device-Platform-InferenceBackend matrix is presented as following,
The benchmark can be found [here](docs/en/03-benchmark/benchmark.md)
| Device / Platform | Linux | Windows | macOS | Android |
| ----------------- | --------------------------------------------------------------- | --------------------------------------- | -------- | ---------------- |
| x86_64 CPU | ✔ONNX Runtime<br>pplnn<br>ncnn<br>OpenVINO<br>LibTorch | ✔ONNX Runtime<br>OpenVINO | - | - |
| ARM CPU | ✔ncnn | - | - | ✔ncnn |
| RISC-V | ✔ncnn | - | - | - |
| NVIDIA GPU | ✔ONNX Runtime<br>TensorRT<br>pplnn<br>LibTorch | ✔ONNX Runtime<br>TensorRT<br>pplnn | - | - |
| NVIDIA Jetson | ✔TensorRT | ✔TensorRT | - | - |
| Huawei ascend310 | ✔CANN | - | - | - |
| Rockchip | ✔RKNN | - | - | - |
| Apple M1 | - | - | ✔CoreML | - |
| Adreno GPU | - | - | - | ✔ncnn<br>SNPE |
| Hexagon DSP | - | - | - | ✔SNPE |
| Device / Platform | Linux | Windows | macOS | Android |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| x86_64 CPU | [![Build Status][pass-backend-ort]][ci-backend-ort]ONNXRuntime<br>[![Build Status][pass-backend-pplnn]][ci-backend-pplnn]pplnn<br>[![Build Status][pass-backend-ncnn]][ci-backend-ncnn]ncnn<br>[![Build Status][pass-backend-torchscript]][ci-backend-torchscript]LibTorch<br>[![Build Status][pass-build-rknpu]][ci-build-rknpu]OpenVINO<br>[![Build Status][pass-build-tvm]][ci-build-tvm]TVM | ![][pass-no-status]ONNXRuntime<br>![][pass-no-status]OpenVINO | - | - |
| ARM CPU | [![Build Status][pass-build-rknpu]][ci-build-rknpu]ncnn | - | - | [![Build Status][pass-build-rknpu]][ci-build-rknpu]ncnn |
| RISC-V | [![Build Status][pass-build-riscv64-gcc]][ci-build-riscv64-gcc]ncnn | - | - | - |
| NVIDIA GPU | ![Build Status][pass-no-status]ONNXRuntime<br>![Build Status][pass-no-status]TensorRT<br>![Build Status][pass-no-status]pplnn<br>![Build Status][pass-no-status]LibTorch<br>![Build Status][pass-no-status]TVM | ![Build Status][pass-no-status]ONNXRuntime<br>![Build Status][pass-no-status]TensorRT<br>![Build Status][pass-no-status]pplnn | - | - |
| NVIDIA Jetson | [![Build Status][pass-build-rknpu]][ci-build-rknpu]TensorRT | - | - | - |
| Huawei ascend310 | [![Build Status][pass-backend-ascend]][ci-backend-ascend]CANN | - | - | - |
| Rockchip | [![Build Status][pass-backend-rknn]][ci-backend-rknn]RKNN | - | - | - |
| Apple M1 | - | - | [![Build Status][pass-backend-coreml]][ci-backend-coreml]CoreML | - |
| Adreno GPU | - | - | - | [![Build Status][pass-backend-snpe]][ci-backend-snpe]SNPE<br>[![Build Status][pass-build-rknpu]][ci-build-rknpu]ncnn |
| Hexagon DSP | - | - | - | [![Build Status][pass-backend-snpe]][ci-backend-snpe]SNPE |
### Efficient and scalable C/C++ SDK Framework
@ -82,11 +97,14 @@ Please read [getting_started](docs/en/get_started.md) for the basic usage of MMD
- [Build from Docker](docs/en/01-how-to-build/build_from_docker.md)
- [Build from Script](docs/en/01-how-to-build/build_from_script.md)
- [Build for Linux](docs/en/01-how-to-build/linux-x86_64.md)
- [Build for Windows](docs/en/01-how-to-build/windows.md)
- [Build for macOS](docs/en/01-how-to-build/macos-arm64.md)
- [Build for Win10](docs/en/01-how-to-build/windows.md)
- [Build for Android](docs/en/01-how-to-build/android.md)
- [Build for Jetson](docs/en/01-how-to-build/jetsons.md)
- [Build for SNPE](docs/en/01-how-to-build/snpe.md)
- [Build for Rockchip](docs/en/01-how-to-build/rockchip.md)
- [Cross Build for aarch64](docs/en/01-how-to-build/cross_build_ncnn_aarch64.md)
- User Guide
- [How to convert model](docs/en/02-how-to-run/convert_model.md)
- [How to write config](docs/en/02-how-to-run/write_config.md)
@ -148,6 +166,7 @@ This project is released under the [Apache 2.0 license](LICENSE).
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
@ -162,3 +181,27 @@ This project is released under the [Apache 2.0 license](LICENSE).
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
[ci-backend-ascend]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ascend.yml
[ci-backend-coreml]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-coreml.yml
[ci-backend-ncnn]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ncnn.yml
[ci-backend-ort]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ort.yml
[ci-backend-pplnn]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-pplnn.yml
[ci-backend-rknn]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-rknn.yml
[ci-backend-snpe]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-snpe.yml
[ci-backend-torchscript]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-torchscript.yml
[ci-build-riscv64-gcc]: https://github.com/open-mmlab/mmdeploy/actions/workflows/linux-riscv64-gcc.yml
[ci-build-rknpu]: https://github.com/open-mmlab/mmdeploy/actions/workflows/linux-rknpu.yml
[ci-build-tvm]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-tvm.yml
[pass-backend-ascend]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ascend.yml?branch=master
[pass-backend-coreml]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-coreml.yml?branch=master
[pass-backend-ncnn]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ncnn.yml?branch=master
[pass-backend-ort]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ort.yml?branch=master
[pass-backend-pplnn]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-pplnn.yml?branch=master
[pass-backend-rknn]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-rknn.yml?branch=master
[pass-backend-snpe]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-snpe.yml?branch=master
[pass-backend-torchscript]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ort.yml?branch=master
[pass-build-riscv64-gcc]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/linux-riscv64-gcc.yml?branch=master
[pass-build-rknpu]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-rknn.yml?branch=master
[pass-build-tvm]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-tvm.yml?branch=master
[pass-no-status]: https://img.shields.io/badge/build-no%20status-lightgrey


@ -28,6 +28,16 @@
[English](README.md) | 简体中文
## MMDeploy 1.x
The brand-new MMDeploy 1.x has been released. It is adapted to the OpenMMLab 2.0 ecosystem, so be sure to **align the versions** when using it.
The default branch of the MMDeploy repository has been switched from `master` to `main`. MMDeploy 0.x (`master`) will be gradually deprecated, and new features will only be added to MMDeploy 1.x (`main`).
| mmdeploy | mmengine | mmcv | mmdet | mmcls and others |
| :------: | :------: | :------: | :------: | :--------------: |
| 0.x.y | - | \<=1.x.y | \<=2.x.y | 0.x.y |
| 1.x.y | 0.x.y | 2.x.y | 3.x.y | 1.x.y |
## Introduction
MMDeploy is the [OpenMMLab](https://openmmlab.com/) model deployment toolbox that **provides a unified deployment experience for every algorithm library**. With MMDeploy, developers can easily generate the SDK required by the target hardware from a training repo, saving a great deal of adaptation time.
@ -50,23 +60,28 @@ MMDeploy 是 [OpenMMLab](https://openmmlab.com/) 模型部署工具箱,**为
- [mmpose](docs/zh_cn/04-supported-codebases/mmpose.md)
- [mmdet3d](docs/zh_cn/04-supported-codebases/mmdet3d.md)
- [mmrotate](docs/zh_cn/04-supported-codebases/mmrotate.md)
- [mmaction2](docs/zh_cn/04-supported-codebases/mmaction2.md)
### Multiple inference backends are supported
The supported device platforms and inference engines are shown in the table below. For benchmarks, please refer to [here](docs/zh_cn/03-benchmark/benchmark.md)
| Device / Platform | Linux | Windows | macOS | Android |
| ----------------- | --------------------------------------------------------------- | --------------------------------------- | -------- | ---------------- |
| x86_64 CPU | ✔ONNX Runtime<br>pplnn<br>ncnn<br>OpenVINO<br>LibTorch | ✔ONNX Runtime<br>OpenVINO | - | - |
| ARM CPU | ✔ncnn | - | - | ✔ncnn |
| RISC-V | ✔ncnn | - | - | - |
| NVIDIA GPU | ✔ONNX Runtime<br>TensorRT<br>pplnn<br>LibTorch | ✔ONNX Runtime<br>TensorRT<br>pplnn | - | - |
| NVIDIA Jetson | ✔TensorRT | ✔TensorRT | - | - |
| Huawei ascend310 | ✔CANN | - | - | - |
| Rockchip | ✔RKNN | - | - | - |
| Apple M1 | - | - | ✔CoreML | - |
| Adreno GPU | - | - | - | ✔ncnn<br>SNPE |
| Hexagon DSP | - | - | - | ✔SNPE |
| Device / Platform | Linux | Windows | macOS | Android |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| x86_64 CPU | [![Build Status][pass-backend-ort]][ci-backend-ort]ONNXRuntime<br>[![Build Status][pass-backend-pplnn]][ci-backend-pplnn]pplnn<br>[![Build Status][pass-backend-ncnn]][ci-backend-ncnn]ncnn<br>[![Build Status][pass-backend-torchscript]][ci-backend-torchscript]LibTorch<br>[![Build Status][pass-build-rknpu]][ci-build-rknpu]OpenVINO<br>[![Build Status][pass-build-tvm]][ci-build-tvm]TVM | ![][pass-no-status]ONNXRuntime<br>![][pass-no-status]OpenVINO | - | - |
| ARM CPU | [![Build Status][pass-build-rknpu]][ci-build-rknpu]ncnn | - | - | [![Build Status][pass-build-rknpu]][ci-build-rknpu]ncnn |
| RISC-V | [![Build Status][pass-build-riscv64-gcc]][ci-build-riscv64-gcc]ncnn | - | - | - |
| NVIDIA GPU | ![Build Status][pass-no-status]ONNXRuntime<br>![Build Status][pass-no-status]TensorRT<br>![Build Status][pass-no-status]pplnn<br>![Build Status][pass-no-status]LibTorch<br>![Build Status][pass-no-status]TVM | ![Build Status][pass-no-status]ONNXRuntime<br>![Build Status][pass-no-status]TensorRT<br>![Build Status][pass-no-status]pplnn | - | - |
| NVIDIA Jetson | [![Build Status][pass-build-rknpu]][ci-build-rknpu]TensorRT | - | - | - |
| Huawei ascend310 | [![Build Status][pass-backend-ascend]][ci-backend-ascend]CANN | - | - | - |
| Rockchip | [![Build Status][pass-backend-rknn]][ci-backend-rknn]RKNN | - | - | - |
| Apple M1 | - | - | [![Build Status][pass-backend-coreml]][ci-backend-coreml]CoreML | - |
| Adreno GPU | - | - | - | [![Build Status][pass-backend-snpe]][ci-backend-snpe]SNPE<br>[![Build Status][pass-build-rknpu]][ci-build-rknpu]ncnn |
| Hexagon DSP | - | - | - | [![Build Status][pass-backend-snpe]][ci-backend-snpe]SNPE |
### Highly customizable SDK
@ -81,11 +96,14 @@ MMDeploy 是 [OpenMMLab](https://openmmlab.com/) 模型部署工具箱,**为
- [One-click installation script](docs/zh_cn/01-how-to-build/build_from_script.md)
- [Build from Docker](docs/zh_cn/01-how-to-build/build_from_docker.md)
- [Build for Linux](docs/zh_cn/01-how-to-build/linux-x86_64.md)
- [Build for Windows](docs/zh_cn/01-how-to-build/windows.md)
- [Build for macOS](docs/zh_cn/01-how-to-build/macos-arm64.md)
- [Build for Win10](docs/zh_cn/01-how-to-build/windows.md)
- [Build for Android](docs/zh_cn/01-how-to-build/android.md)
- [Build for Jetson](docs/zh_cn/01-how-to-build/jetsons.md)
- [Build for SNPE](docs/zh_cn/01-how-to-build/snpe.md)
- [Build for Rockchip](docs/zh_cn/01-how-to-build/rockchip.md)
- [Cross Build for aarch64](docs/zh_cn/01-how-to-build/cross_build_ncnn_aarch64.md)
- Usage
- [Convert models to the inference backend](docs/zh_cn/02-how-to-run/convert_model.md)
- [Configure conversion parameters](docs/zh_cn/02-how-to-run/write_config.md)
@ -153,6 +171,7 @@ MMDeploy 是 [OpenMMLab](https://openmmlab.com/) 模型部署工具箱,**为
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab object detection toolbox
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection
- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab end-to-end text detection, recognition, and understanding toolbox
@ -188,3 +207,27 @@ MMDeploy 是 [OpenMMLab](https://openmmlab.com/) 模型部署工具箱,**为
- 🔥 Provides a platform for in-depth exchanges with developers from all industries
Packed with practical content 📘 and waiting for you 💗. The OpenMMLab community looks forward to having you join 👬
[ci-backend-ascend]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ascend.yml
[ci-backend-coreml]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-coreml.yml
[ci-backend-ncnn]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ncnn.yml
[ci-backend-ort]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ort.yml
[ci-backend-pplnn]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-pplnn.yml
[ci-backend-rknn]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-rknn.yml
[ci-backend-snpe]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-snpe.yml
[ci-backend-torchscript]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-torchscript.yml
[ci-build-riscv64-gcc]: https://github.com/open-mmlab/mmdeploy/actions/workflows/linux-riscv64-gcc.yml
[ci-build-rknpu]: https://github.com/open-mmlab/mmdeploy/actions/workflows/linux-rknpu.yml
[ci-build-tvm]: https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-tvm.yml
[pass-backend-ascend]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ascend.yml?branch=master
[pass-backend-coreml]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-coreml.yml?branch=master
[pass-backend-ncnn]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ncnn.yml?branch=master
[pass-backend-ort]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ort.yml?branch=master
[pass-backend-pplnn]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-pplnn.yml?branch=master
[pass-backend-rknn]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-rknn.yml?branch=master
[pass-backend-snpe]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-snpe.yml?branch=master
[pass-backend-torchscript]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ort.yml?branch=master
[pass-build-riscv64-gcc]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/linux-riscv64-gcc.yml?branch=master
[pass-build-rknpu]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-rknn.yml?branch=master
[pass-build-tvm]: https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-tvm.yml?branch=master
[pass-no-status]: https://img.shields.io/badge/build-no%20status-lightgrey


@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
function (mmdeploy_export NAME)
function (mmdeploy_export_impl NAME)
set(_LIB_DIR lib)
if (MSVC)
set(_LIB_DIR bin)
@ -12,6 +12,24 @@ function (mmdeploy_export NAME)
RUNTIME DESTINATION bin)
endfunction ()
macro(mmdeploy_add_net NAME)
if (MMDEPLOY_DYNAMIC_BACKEND)
mmdeploy_add_library(${NAME} SHARED ${ARGN})
# DYNAMIC_BACKEND implies BUILD_SDK_MONOLITHIC
mmdeploy_export_impl(${NAME})
target_link_libraries(${PROJECT_NAME} PRIVATE mmdeploy)
set(BACKEND_LIB_NAMES ${BACKEND_LIB_NAMES} ${PROJECT_NAME} PARENT_SCOPE)
else ()
mmdeploy_add_module(${NAME} ${ARGN})
endif ()
endmacro()
function (mmdeploy_export NAME)
if (NOT MMDEPLOY_BUILD_SDK_MONOLITHIC)
mmdeploy_export_impl(${NAME})
endif ()
endfunction ()
function (mmdeploy_add_library NAME)
# EXCLUDE: exclude from registering & exporting


@ -10,13 +10,12 @@ set(MMDEPLOY_TARGET_DEVICES @MMDEPLOY_TARGET_DEVICES@)
set(MMDEPLOY_TARGET_BACKENDS @MMDEPLOY_TARGET_BACKENDS@)
set(MMDEPLOY_BUILD_TYPE @CMAKE_BUILD_TYPE@)
set(MMDEPLOY_BUILD_SHARED @MMDEPLOY_SHARED_LIBS@)
set(MMDEPLOY_BUILD_SDK_CXX_API @MMDEPLOY_BUILD_SDK_CXX_API@)
set(MMDEPLOY_BUILD_SDK_MONOLITHIC @MMDEPLOY_BUILD_SDK_MONOLITHIC@)
set(MMDEPLOY_VERSION_MAJOR @MMDEPLOY_VERSION_MAJOR@)
set(MMDEPLOY_VERSION_MINOR @MMDEPLOY_VERSION_MINOR@)
set(MMDEPLOY_VERSION_PATCH @MMDEPLOY_VERSION_PATCH@)
if (NOT MMDEPLOY_BUILD_SHARED)
if (NOT MMDEPLOY_BUILD_SHARED AND NOT MMDEPLOY_BUILD_SDK_MONOLITHIC)
if ("cuda" IN_LIST MMDEPLOY_TARGET_DEVICES)
find_package(CUDA REQUIRED)
if(MSVC)


@ -11,6 +11,37 @@ if (MSVC OR (NOT DEFINED CMAKE_CUDA_RUNTIME_LIBRARY))
set(CUDA_USE_STATIC_CUDA_RUNTIME OFF)
endif ()
if (MSVC)
# no plugin in BuildCustomizations and no CUDA toolset specified
if (NOT CMAKE_VS_PLATFORM_TOOLSET_CUDA)
message(FATAL_ERROR "Please install CUDA MSBuildExtensions")
endif ()
if (CMAKE_VS_PLATFORM_TOOLSET_CUDA_CUSTOM_DIR)
# find_package(CUDA) requires ENV{CUDA_PATH}
set(ENV{CUDA_PATH} ${CMAKE_VS_PLATFORM_TOOLSET_CUDA_CUSTOM_DIR})
else ()
# we use CUDA_PATH and ignore nvcc.exe
# cmake will import the highest CUDA props version, which may not match CUDA_PATH
if (NOT (DEFINED ENV{CUDA_PATH}))
message(FATAL_ERROR "Please set CUDA_PATH environment variable")
endif ()
string(REGEX REPLACE ".*v([0-9]+)\\..*" "\\1" _MAJOR $ENV{CUDA_PATH})
string(REGEX REPLACE ".*v[0-9]+\\.([0-9]+).*" "\\1" _MINOR $ENV{CUDA_PATH})
if (NOT (${CMAKE_VS_PLATFORM_TOOLSET_CUDA} STREQUAL "${_MAJOR}.${_MINOR}"))
message(FATAL_ERROR "Auto detected cuda version ${CMAKE_VS_PLATFORM_TOOLSET_CUDA}"
" is mismatch with ENV{CUDA_PATH} $ENV{CUDA_PATH}. Please modify CUDA_PATH"
" to match ${CMAKE_VS_PLATFORM_TOOLSET_CUDA} or specify cuda toolset by"
" cmake -T cuda=/path/to/cuda ..")
endif ()
if (NOT (DEFINED ENV{CUDA_PATH_V${_MAJOR}_${_MINOR}}))
message(FATAL_ERROR "Please set CUDA_PATH_V${_MAJOR}_${_MINOR} environment variable")
endif ()
endif ()
endif ()
# nvcc compiler settings
find_package(CUDA REQUIRED)
@ -42,6 +73,7 @@ if (NOT CMAKE_CUDA_ARCHITECTURES)
if (CUDA_VERSION_MAJOR VERSION_GREATER_EQUAL "8")
set(_NVCC_FLAGS "${_NVCC_FLAGS} -gencode arch=compute_60,code=sm_60")
set(_NVCC_FLAGS "${_NVCC_FLAGS} -gencode arch=compute_61,code=sm_61")
set(_NVCC_FLAGS "${_NVCC_FLAGS} -gencode arch=compute_62,code=sm_62")
endif ()
if (CUDA_VERSION_MAJOR VERSION_GREATER_EQUAL "9")
set(_NVCC_FLAGS "${_NVCC_FLAGS} -gencode arch=compute_70,code=sm_70")


@ -1,15 +1,53 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include <Windows.h>
#include <string>
#include <cstdio>
#ifdef _WIN32
#include <Windows.h>
#else
#include <dlfcn.h>
#endif
#ifdef _WIN32
#define LIBPREFIX ""
#define LIBSUFFIX ".dll"
#elif defined(__APPLE__)
#define LIBPREFIX "lib"
#define LIBSUFFIX ".dylib"
#else
#define LIBPREFIX "lib"
#define LIBSUFFIX ".so"
#endif
namespace mmdeploy {
namespace {
#ifdef _WIN32
inline static const std::wstring GetDllPath() {
HMODULE hm = NULL;
GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS | GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
(LPWSTR)&GetDllPath, &hm);
std::wstring ret;
ret.resize(MAX_PATH);
GetModuleFileNameW(hm, &ret[0], ret.size());
ret = ret.substr(0, ret.find_last_of(L"/\\"));
return ret;
}
#endif
void* mmdeploy_load_library(const char* name) {
fprintf(stderr, "loading %s ...\n", name);
auto handle = LoadLibraryA(name);
#ifdef _WIN32
auto handle = LoadLibraryExA(name, NULL, LOAD_LIBRARY_SEARCH_USER_DIRS);
if (handle == NULL) {
handle = LoadLibraryExA(name, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
}
#else
auto handle = dlopen(name, RTLD_NOW | RTLD_GLOBAL);
#endif
if (!handle) {
fprintf(stderr, "failed to load library %s\n", name);
return nullptr;
@ -22,11 +60,15 @@ void* mmdeploy_load_library(const char* name) {
class Loader {
public:
Loader() {
#ifdef _WIN32
AddDllDirectory(GetDllPath().c_str());
#endif
const char* modules[] = {
@_MMDEPLOY_DYNAMIC_MODULES@
};
for (const auto name : modules) {
mmdeploy_load_library(name);
std::string libname = std::string{} + LIBPREFIX + name + LIBSUFFIX;
mmdeploy_load_library(libname.c_str());
}
}
};


@ -0,0 +1,47 @@
# Copyright (c) OpenMMLab. All rights reserved.
if (NOT DEFINED TVM_DIR)
set(TVM_DIR $ENV{TVM_DIR})
endif ()
if (NOT TVM_DIR)
message(FATAL_ERROR "Please set TVM_DIR with cmake -D option.")
endif()
find_path(
TVM_INCLUDE_DIR tvm/runtime/c_runtime_api.h
HINTS ${TVM_DIR}
PATH_SUFFIXES include)
find_path(
DMLC_CORE_INCLUDE_DIR dmlc/io.h
HINTS ${TVM_DIR}/3rdparty/dmlc-core
PATH_SUFFIXES include)
find_path(
DLPACK_INCLUDE_DIR dlpack/dlpack.h
HINTS ${TVM_DIR}/3rdparty/dlpack
PATH_SUFFIXES include)
find_library(
TVM_LIBRARY_PATH tvm_runtime
HINTS ${TVM_DIR}
PATH_SUFFIXES build lib build/${CMAKE_BUILD_TYPE})
if (NOT (TVM_INCLUDE_DIR AND DMLC_CORE_INCLUDE_DIR AND DLPACK_INCLUDE_DIR AND TVM_LIBRARY_PATH))
message(FATAL_ERROR "Couldn't find tvm in TVM_DIR: "
"${TVM_DIR}, please check if the path is correct.")
endif()
add_library(tvm_runtime SHARED IMPORTED)
set_property(TARGET tvm_runtime APPEND PROPERTY IMPORTED_CONFIGURATIONS RELEASE)
if (MSVC)
set_target_properties(tvm_runtime PROPERTIES
IMPORTED_IMPLIB_RELEASE ${TVM_LIBRARY_PATH}
INTERFACE_INCLUDE_DIRECTORIES ${TVM_INCLUDE_DIR} ${DMLC_CORE_INCLUDE_DIR} ${DLPACK_INCLUDE_DIR}
)
else()
set_target_properties(tvm_runtime PROPERTIES
IMPORTED_LOCATION_RELEASE ${TVM_LIBRARY_PATH}
INTERFACE_INCLUDE_DIRECTORIES ${TVM_INCLUDE_DIR} ${DMLC_CORE_INCLUDE_DIR} ${DLPACK_INCLUDE_DIR}
)
endif()


@ -29,6 +29,7 @@ else ()
message(FATAL_ERROR "Cannot find TensorRT libs")
endif ()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(TENSORRT DEFAULT_MSG TENSORRT_INCLUDE_DIR
TENSORRT_LIBRARY)
if (NOT TENSORRT_FOUND)


@ -0,0 +1,17 @@
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER "aarch64-linux-gnu-gcc")
set(CMAKE_CXX_COMPILER "aarch64-linux-gnu-g++")
set(CMAKE_LINKER "aarch64-linux-gnu-ld")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_C_FLAGS "-march=armv8-a")
set(CMAKE_CXX_FLAGS "-march=armv8-a")
# cache flags
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}" CACHE STRING "c flags")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}" CACHE STRING "c++ flags")


@ -0,0 +1,16 @@
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER "arm-linux-gnueabihf-gcc")
set(CMAKE_CXX_COMPILER "arm-linux-gnueabihf-g++")
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_C_FLAGS "-march=armv7-a -mfloat-abi=hard -mfpu=neon")
set(CMAKE_CXX_FLAGS "-march=armv7-a -mfloat-abi=hard -mfpu=neon")
# cache flags
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}" CACHE STRING "c flags")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}" CACHE STRING "c++ flags")


@ -0,0 +1,23 @@
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR rockchip)
if(DEFINED ENV{RKNN_TOOL_CHAIN})
file(TO_CMAKE_PATH $ENV{RKNN_TOOL_CHAIN} RKNN_TOOL_CHAIN)
else()
message(FATAL_ERROR "RKNN_TOOL_CHAIN env must be defined")
endif()
set(CMAKE_C_COMPILER ${RKNN_TOOL_CHAIN}/bin/aarch64-rockchip-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER ${RKNN_TOOL_CHAIN}/bin/aarch64-rockchip-linux-gnu-g++)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
set(CMAKE_C_FLAGS "-Wl,--allow-shlib-undefined")
set(CMAKE_CXX_FLAGS "-Wl,--allow-shlib-undefined")
# cache flags
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}" CACHE STRING "c flags")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}" CACHE STRING "c++ flags")


@ -1 +1,11 @@
backend_config = dict(type='coreml', convert_to='mlprogram')
backend_config = dict(
type='coreml',
# mlprogram or neuralnetwork
convert_to='mlprogram',
common_config=dict(
# FLOAT16 or FLOAT32, see coremltools.precision
compute_precision='FLOAT32',
# iOS15, iOS16, etc., see coremltools.target
minimum_deployment_target='iOS16',
skip_model_load=False),
)
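# The commented block below is an illustrative sketch, not part of this
# backend file. It shows how a task config typically consumes these defaults;
# the relative paths and the 224x224 shapes are assumptions that follow the
# pattern of the other configs in this change, not existing files.
# _base_ = ['./classification_static.py', '../_base_/backends/coreml.py']
# ir_config = dict(input_shape=(224, 224))
# backend_config = dict(
#     # override the precision declared above
#     common_config=dict(compute_precision='FLOAT16'),
#     model_inputs=[
#         dict(
#             input_shapes=dict(
#                 input=dict(
#                     min_shape=[1, 3, 224, 224],
#                     max_shape=[1, 3, 224, 224],
#                     default_shape=[1, 3, 224, 224])))
#     ])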


@ -1,8 +1,6 @@
backend_config = dict(
type='rknn',
common_config=dict(
mean_values=None,
std_values=None,
target_platform='rk3588',
optimization_level=3),
quantization_config=dict(do_quantization=False, dataset=None))
target_platform='rv1126', # 'rk3588'
optimization_level=1),
quantization_config=dict(do_quantization=True, dataset=None))
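# The commented config below is an illustrative sketch, not part of this
# file: a concrete deployment config can inherit these defaults and switch
# them per board. The file layout and the 640x640 shape are assumptions.
# _base_ = ['../_base_/base_static.py', '../../_base_/backends/rknn.py']
# onnx_config = dict(input_shape=[640, 640])
# backend_config = dict(
#     # target an RK3588 board instead of the default rv1126
#     common_config=dict(target_platform='rk3588'),
#     # skip post-training quantization for a quick functional check
#     quantization_config=dict(do_quantization=False))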


@ -0,0 +1 @@
backend_config = dict(type='tvm')


@ -0,0 +1,15 @@
_base_ = ['./video-recognition_static.py']
onnx_config = dict(
dynamic_axes={
'input': {
0: 'batch',
1: 'num_crops * num_segs',
3: 'height',
4: 'width'
},
'output': {
0: 'batch',
}
},
input_shape=None)
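# For context, the commented snippet below is a minimal, standalone sketch of
# how a dynamic_axes mapping like the one above is eventually handed to
# torch.onnx.export. The toy model and output file name are placeholders and
# are not part of MMDeploy's own export pipeline.
# import torch
#
# class ToyRecognizer(torch.nn.Module):
#     def forward(self, x):
#         # reduce crops/segments and spatial dims to one score per clip
#         return x.mean(dim=(1, 2, 3, 4))
#
# dummy = torch.randn(1, 2, 3, 224, 224)  # batch, num_crops * num_segs, C, H, W
# torch.onnx.export(
#     ToyRecognizer().eval(), dummy, 'toy_recognizer.onnx',
#     input_names=['input'], output_names=['output'],
#     dynamic_axes={
#         'input': {0: 'batch', 1: 'num_crops * num_segs', 3: 'height', 4: 'width'},
#         'output': {0: 'batch'}
#     })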


@ -0,0 +1,14 @@
_base_ = ['./video-recognition_static.py', '../../_base_/backends/tensorrt.py']
onnx_config = dict(input_shape=[224, 224])
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 250, 3, 224, 224],
opt_shape=[1, 250, 3, 224, 224],
max_shape=[1, 250, 3, 224, 224])))
])


@ -0,0 +1,16 @@
_base_ = ['./video-recognition_static.py']
onnx_config = dict(
dynamic_axes={
'input': {
0: 'batch',
1: 'num_crops * num_segs',
3: 'time',
4: 'height',
5: 'width'
},
'output': {
0: 'batch',
}
},
input_shape=None)


@ -0,0 +1,14 @@
_base_ = ['./video-recognition_static.py', '../../_base_/backends/tensorrt.py']
onnx_config = dict(input_shape=[256, 256])
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 30, 3, 32, 256, 256],
opt_shape=[1, 30, 3, 32, 256, 256],
max_shape=[1, 30, 3, 32, 256, 256])))
])


@ -0,0 +1,5 @@
_base_ = [
'./video-recognition_static.py', '../../_base_/backends/onnxruntime.py'
]
onnx_config = dict(input_shape=None)


@ -0,0 +1,19 @@
_base_ = ['./video-recognition_static.py', '../../_base_/backends/sdk.py']
codebase_config = dict(model_type='sdk')
# The SampleFrames settings will be read from model_cfg into the
# pipeline of backend_config
backend_config = dict(pipeline=[
dict(type='OpenCVInit', num_threads=1),
dict(
type='SampleFrames',
clip_len=1,
frame_interval=1,
num_clips=25,
test_mode=True),
dict(type='OpenCVDecode'),
dict(type='Collect', keys=['imgs'], meta_keys=[]),
dict(type='ListToNumpy', keys=['imgs'])
])


@ -0,0 +1,3 @@
_base_ = ['../../_base_/onnx_config.py']
codebase_config = dict(type='mmaction', task='VideoRecognition')


@ -0,0 +1,11 @@
_base_ = ['./classification_coreml_dynamic-224x224-224x224.py']
ir_config = dict(input_shape=(384, 384))
backend_config = dict(model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 384, 384],
max_shape=[1, 3, 384, 384],
default_shape=[1, 3, 384, 384])))
])


@ -0,0 +1,7 @@
_base_ = ['./classification_static.py', '../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[224, 224])
codebase_config = dict(model_type='end2end')
backend_config = dict(
input_size_list=[[3, 224, 224]],
quantization_config=dict(do_quantization=False))


@ -1,5 +1,5 @@
_base_ = ['./classification_static.py', '../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[224, 224])
codebase_config = dict(model_type='rknn')
codebase_config = dict(model_type='end2end')
backend_config = dict(input_size_list=[[3, 224, 224]])


@ -0,0 +1,12 @@
_base_ = ['./classification_static.py', '../_base_/backends/tvm.py']
onnx_config = dict(input_shape=[224, 224])
backend_config = dict(model_inputs=[
dict(
shape=dict(input=[1, 3, 224, 224]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoScheduleTuner',
log_file='tvm_tune_log.log',
num_measure_trials=2000))
])


@ -0,0 +1,16 @@
_base_ = ['./classification_tvm-autotvm_static-224x224.py']
calib_config = dict(create_calib=True, calib_file='calib_data.h5')
backend_config = dict(model_inputs=[
dict(
shape=dict(input=[1, 3, 224, 224]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoTVMTuner',
log_file='tvm_tune_log.log',
n_trial=1000,
tuner=dict(type='XGBTuner'),
),
qconfig=dict(calibrate_mode='kl_divergence', weight_scale='max'),
)
])


@ -0,0 +1,13 @@
_base_ = ['./classification_static.py', '../_base_/backends/tvm.py']
onnx_config = dict(input_shape=[224, 224])
backend_config = dict(model_inputs=[
dict(
shape=dict(input=[1, 3, 224, 224]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoTVMTuner',
log_file='tvm_tune_log.log',
n_trial=1000,
tuner=dict(type='XGBTuner')))
])


@ -0,0 +1,16 @@
_base_ = ['./base_torchscript.py', '../../_base_/backends/coreml.py']
ir_config = dict(
input_shape=(1344, 800), output_names=['dets', 'labels', 'masks'])
backend_config = dict(model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 800, 1344],
max_shape=[1, 3, 800, 1344],
default_shape=[1, 3, 800, 1344])))
])
# Don't know if this is necessary
codebase_config = dict(post_processing=dict(export_postprocess_mask=False))


@ -0,0 +1,11 @@
_base_ = ['../_base_/base_torchscript.py', '../../_base_/backends/coreml.py']
ir_config = dict(input_shape=(608, 608))
backend_config = dict(model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 608, 608],
max_shape=[1, 3, 608, 608],
default_shape=[1, 3, 608, 608])))
])


@ -0,0 +1,34 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[320, 320])
codebase_config = dict(model_type='rknn')
backend_config = dict(
input_size_list=[[3, 320, 320]],
quantization_config=dict(do_quantization=False))
# # yolov3, yolox for rknn-toolkit and rknn-toolkit2
# partition_config = dict(
# type='rknn', # the partition policy name
# apply_marks=True, # should always be set to True
# partition_cfg=[
# dict(
# save_file='model.onnx', # name to save the partitioned onnx
# start=['detector_forward:input'], # [mark_name:input, ...]
# end=['yolo_head:input'], # [mark_name:output, ...]
# output_names=[f'pred_maps.{i}' for i in range(3)]) # out names
# ])
# # retinanet, ssd, fsaf for rknn-toolkit2
# partition_config = dict(
# type='rknn', # the partition policy name
# apply_marks=True,
# partition_cfg=[
# dict(
# save_file='model.onnx',
# start='detector_forward:input',
# end=['BaseDenseHead:output'],
# output_names=[f'BaseDenseHead.cls.{i}' for i in range(5)] +
# [f'BaseDenseHead.loc.{i}' for i in range(5)])
# ])


@ -0,0 +1,32 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[320, 320])
codebase_config = dict(model_type='rknn')
backend_config = dict(input_size_list=[[3, 320, 320]])
# # yolov3, yolox for rknn-toolkit and rknn-toolkit2
# partition_config = dict(
# type='rknn', # the partition policy name
# apply_marks=True, # should always be set to True
# partition_cfg=[
# dict(
# save_file='model.onnx', # name to save the partitioned onnx
# start=['detector_forward:input'], # [mark_name:input, ...]
# end=['yolo_head:input'], # [mark_name:output, ...]
# output_names=[f'pred_maps.{i}' for i in range(3)]) # out names
# ])
# # retinanet, ssd, fsaf for rknn-toolkit2
# partition_config = dict(
# type='rknn', # the partition policy name
# apply_marks=True,
# partition_cfg=[
# dict(
# save_file='model.onnx',
# start='detector_forward:input',
# end=['BaseDenseHead:output'],
# output_names=[f'BaseDenseHead.cls.{i}' for i in range(5)] +
# [f'BaseDenseHead.loc.{i}' for i in range(5)])
# ])


@ -1,17 +0,0 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[640, 640])
codebase_config = dict(model_type='rknn')
backend_config = dict(input_size_list=[[3, 640, 640]])
partition_config = dict(
type='rknn', # the partition policy name
apply_marks=True, # should always be set to True
partition_cfg=[
dict(
save_file='model.onnx', # name to save the partitioned onnx model
start=['detector_forward:input'], # [mark_name:input/output, ...]
end=['yolo_head:input']) # [mark_name:input/output, ...]
])


@ -0,0 +1,13 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/tvm.py']
onnx_config = dict(input_shape=[1344, 800])
backend_config = dict(model_inputs=[
dict(
use_vm=True,
shape=dict(input=[1, 3, 800, 1344]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoScheduleTuner',
log_file='tvm_tune_log.log',
num_measure_trials=2000))
])


@ -0,0 +1,15 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/tvm.py']
onnx_config = dict(input_shape=[300, 300])
backend_config = dict(model_inputs=[
dict(
use_vm=True,
shape=dict(input=[1, 3, 300, 300]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoTVMTuner',
log_file='tvm_tune_log.log',
n_trial=1000,
tuner=dict(type='XGBTuner'),
))
])


@ -0,0 +1,15 @@
_base_ = ['../_base_/base_static.py', '../../_base_/backends/tvm.py']
onnx_config = dict(input_shape=[1344, 800])
backend_config = dict(model_inputs=[
dict(
use_vm=True,
shape=dict(input=[1, 3, 800, 1344]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoTVMTuner',
log_file='tvm_tune_log.log',
n_trial=1000,
tuner=dict(type='XGBTuner'),
))
])


@ -8,5 +8,6 @@ partition_config = dict(
dict(
save_file='yolov3.onnx',
start=['detector_forward:input'],
end=['yolo_head:input'])
end=['yolo_head:input'],
output_names=[f'pred_maps.{i}' for i in range(3)])
])


@ -0,0 +1 @@
_base_ = ['../_base_/base_instance-seg_coreml_static-800x1344.py']


@ -0,0 +1,15 @@
_base_ = [
'../_base_/base_instance-seg_static.py', '../../_base_/backends/tvm.py'
]
onnx_config = dict(input_shape=[1344, 800])
backend_config = dict(model_inputs=[
dict(
use_vm=True,
shape=dict(input=[1, 3, 800, 1344]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoScheduleTuner',
log_file='tvm_tune_log.log',
num_measure_trials=20000))
])


@ -0,0 +1,17 @@
_base_ = [
'../_base_/base_instance-seg_static.py', '../../_base_/backends/tvm.py'
]
onnx_config = dict(input_shape=[1344, 800])
backend_config = dict(model_inputs=[
dict(
use_vm=True,
shape=dict(input=[1, 3, 800, 1344]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoTVMTuner',
log_file='tvm_tune_log.log',
n_trial=10000,
tuner=dict(type='XGBTuner'),
))
])


@ -0,0 +1,24 @@
_base_ = ['./monocular-detection_static.py']
onnx_config = dict(
dynamic_axes={
'img': {
2: 'height',
3: 'width',
},
'bboxes': {
1: 'num_dets',
},
'scores': {
1: 'num_dets'
},
'labels': {
1: 'num_dets'
},
'dir_scores': {
1: 'num_dets'
},
'attrs': {
1: 'num_dets'
}
}, )


@ -0,0 +1,3 @@
_base_ = [
'./monocular-detection_dynamic.py', '../../_base_/backends/onnxruntime.py'
]


@ -0,0 +1,3 @@
_base_ = [
'./monocular-detection_static.py', '../../_base_/backends/onnxruntime.py'
]


@ -0,0 +1,14 @@
_base_ = ['../../_base_/onnx_config.py']
onnx_config = dict(
input_names=['img', 'cam2img', 'cam2img_inverse'],
output_names=['bboxes', 'scores', 'labels', 'dir_scores', 'attrs'],
input_shape=None,
)
codebase_config = dict(
type='mmdet3d',
task='MonocularDetection',
model_type='end2end',
ann_file='tests/test_codebase/test_mmdet3d/data/nuscenes/n015-2018-07-24'
'-11-22-45+0800__CAM_BACK__1532402927637525_mono3d.coco.json',
)

View File

@ -0,0 +1,16 @@
_base_ = [
'./monocular-detection_static.py', '../../_base_/backends/tensorrt.py'
]
onnx_config = dict(input_shape=(1600, 928))
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 320, 320],
opt_shape=[1, 3, 928, 1600],
max_shape=[1, 3, 1600, 1600])))
])


@ -0,0 +1,16 @@
_base_ = [
'./monocular-detection_static.py', '../../_base_/backends/tensorrt.py'
]
onnx_config = dict(input_shape=(1600, 928))
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 928, 1600],
opt_shape=[1, 3, 928, 1600],
max_shape=[1, 3, 928, 1600])))
])


@ -3,4 +3,4 @@ codebase_config = dict(
type='mmdet3d', task='VoxelDetection', model_type='end2end')
onnx_config = dict(
input_names=['voxels', 'num_points', 'coors'],
output_names=['scores', 'bbox_preds', 'dir_scores'])
output_names=['bboxes', 'scores', 'labels'])


@ -0,0 +1,20 @@
_base_ = ['./inpainting_static.py']
onnx_config = dict(
dynamic_axes=dict(
masked_img={
0: 'batch',
2: 'height',
3: 'width'
},
mask={
0: 'batch',
2: 'height',
3: 'width'
},
output={
0: 'batch',
2: 'height',
3: 'width'
}),
input_shape=None)


@ -0,0 +1 @@
_base_ = ['./inpainting_dynamic.py', '../../_base_/backends/onnxruntime.py']


@ -0,0 +1,3 @@
_base_ = ['./inpainting_static.py', '../../_base_/backends/onnxruntime.py']
onnx_config = dict(input_shape=[256, 256])


@ -0,0 +1,5 @@
_base_ = ['../../_base_/onnx_config.py']
codebase_config = dict(type='mmedit', task='Inpainting')
onnx_config = dict(
input_names=['masked_img', 'mask'], output_names=['fake_img'])


@ -0,0 +1,17 @@
_base_ = ['./inpainting_static.py', '../../_base_/backends/tensorrt-fp16.py']
onnx_config = dict(input_shape=[256, 256])
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
masked_img=dict(
min_shape=[1, 3, 256, 256],
opt_shape=[1, 3, 256, 256],
max_shape=[1, 3, 256, 256]),
mask=dict(
min_shape=[1, 1, 256, 256],
opt_shape=[1, 1, 256, 256],
max_shape=[1, 1, 256, 256])))
])
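
In this inpainting TensorRT config and the two variants that follow (int8 and plain fp32), max_workspace_size=1 << 30 caps the TensorRT workspace at 1 GiB; the shift is plain integer arithmetic:

# 1 << 30 bytes == 1 GiB
workspace_bytes = 1 << 30
assert workspace_bytes == 1024 ** 3 == 1_073_741_824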


@ -0,0 +1,17 @@
_base_ = ['./inpainting_static.py', '../../_base_/backends/tensorrt-int8.py']
onnx_config = dict(input_shape=[256, 256])
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
masked_img=dict(
min_shape=[1, 3, 256, 256],
opt_shape=[1, 3, 256, 256],
max_shape=[1, 3, 256, 256]),
mask=dict(
min_shape=[1, 1, 256, 256],
opt_shape=[1, 1, 256, 256],
max_shape=[1, 1, 256, 256])))
])


@ -0,0 +1,17 @@
_base_ = ['./inpainting_static.py', '../../_base_/backends/tensorrt.py']
onnx_config = dict(input_shape=[256, 256])
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
masked_img=dict(
min_shape=[1, 3, 256, 256],
opt_shape=[1, 3, 256, 256],
max_shape=[1, 3, 256, 256]),
mask=dict(
min_shape=[1, 1, 256, 256],
opt_shape=[1, 1, 256, 256],
max_shape=[1, 1, 256, 256])))
])


@ -0,0 +1,3 @@
_base_ = [
'./super-resolution_dynamic.py', '../../_base_/backends/ncnn-int8.py'
]


@ -0,0 +1,13 @@
_base_ = [
'../../_base_/torchscript_config.py', '../../_base_/backends/coreml.py'
]
codebase_config = dict(type='mmocr', task='TextRecognition')
backend_config = dict(model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 32, 32],
max_shape=[1, 3, 32, 640],
default_shape=[1, 3, 32, 64])))
])


@ -0,0 +1,3 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/ncnn-int8.py']
onnx_config = dict(input_shape=[2048, 1024])


@ -0,0 +1,9 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[320, 320])
codebase_config = dict(model_type='rknn')
backend_config = dict(
input_size_list=[[3, 320, 320]],
quantization_config=dict(do_quantization=False))


@ -0,0 +1,7 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[320, 320])
codebase_config = dict(model_type='rknn', with_argmax=False)
backend_config = dict(input_size_list=[[3, 320, 320]])


@ -1,7 +0,0 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/rknn.py']
onnx_config = dict(input_shape=[512, 512])
codebase_config = dict(model_type='rknn')
backend_config = dict(input_size_list=[[3, 512, 512]])


@ -1,2 +1,2 @@
_base_ = ['../_base_/onnx_config.py']
codebase_config = dict(type='mmseg', task='Segmentation')
codebase_config = dict(type='mmseg', task='Segmentation', with_argmax=True)
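
The with_argmax flag made explicit above controls whether the exported segmentor emits per-pixel class indices (argmax over the channel axis) or raw logits, which is why the RKNN config earlier sets it to False. A toy illustration of the difference, unrelated to any specific model in this diff:

# Hedged sketch: with_argmax=True corresponds to returning class indices per pixel.
import torch

seg_logits = torch.randn(1, 19, 512, 1024)        # [N, num_classes, H, W]
seg_map = seg_logits.argmax(dim=1, keepdim=True)  # [N, 1, H, W], class index per pixel
print(seg_map.shape)  # torch.Size([1, 1, 512, 1024])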


@ -0,0 +1,12 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/tvm.py']
onnx_config = dict(input_shape=[1024, 512])
backend_config = dict(model_inputs=[
dict(
shape=dict(input=[1, 3, 512, 1024]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoScheduleTuner',
log_file='tvm_tune_log.log',
num_measure_trials=2000))
])


@ -0,0 +1,13 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/tvm.py']
onnx_config = dict(input_shape=[1024, 512])
backend_config = dict(model_inputs=[
dict(
shape=dict(input=[1, 3, 512, 1024]),
dtype=dict(input='float32'),
tuner=dict(
type='AutoTVMTuner',
log_file='tvm_tune_log.log',
n_trial=1000,
tuner=dict(type='XGBTuner')))
])


@ -13,6 +13,7 @@ if (MMDEPLOY_BUILD_SDK)
add_subdirectory(device)
add_subdirectory(graph)
add_subdirectory(model)
add_subdirectory(operation)
add_subdirectory(preprocess)
add_subdirectory(net)
add_subdirectory(codebase)


@ -1,10 +1,5 @@
# Copyright (c) OpenMMLab. All rights reserved.
# Python API depends on C++ API
if (MMDEPLOY_BUILD_SDK_PYTHON_API)
set(MMDEPLOY_BUILD_SDK_CXX_API ON)
endif ()
add_subdirectory(c)
add_subdirectory(cxx)
add_subdirectory(java)


@ -12,6 +12,9 @@ macro(add_object name)
target_compile_options(${name} PRIVATE $<$<COMPILE_LANGUAGE:CXX>:-fvisibility=hidden>)
endif ()
target_link_libraries(${name} PRIVATE mmdeploy::core)
target_include_directories(${name} PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
$<INSTALL_INTERFACE:include>)
set(CAPI_OBJS ${CAPI_OBJS} ${name})
mmdeploy_export(${name})
endmacro()
@ -31,6 +34,7 @@ foreach (TASK ${COMMON_LIST})
mmdeploy_add_library(${TARGET_NAME})
target_link_libraries(${TARGET_NAME} PRIVATE ${OBJECT_NAME})
target_include_directories(${TARGET_NAME} PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
$<INSTALL_INTERFACE:include>)
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/mmdeploy/${TASK}.h
DESTINATION include/mmdeploy)
@ -76,5 +80,14 @@ if (MMDEPLOY_BUILD_SDK_CSHARP_API OR MMDEPLOY_BUILD_SDK_MONOLITHIC)
set_target_properties(mmdeploy PROPERTIES
VERSION ${MMDEPLOY_VERSION}
SOVERSION ${MMDEPLOY_VERSION_MAJOR})
mmdeploy_export(mmdeploy)
if (APPLE)
set_target_properties(mmdeploy PROPERTIES
INSTALL_RPATH "@loader_path"
BUILD_RPATH "@loader_path")
else ()
set_target_properties(mmdeploy PROPERTIES
INSTALL_RPATH "\$ORIGIN"
BUILD_RPATH "\$ORIGIN")
endif ()
mmdeploy_export_impl(mmdeploy)
endif ()


@ -1,52 +1,21 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include "classifier.h"
#include "mmdeploy/classifier.h"
#include <numeric>
#include "common_internal.h"
#include "handle.h"
#include "mmdeploy/archive/value_archive.h"
#include "mmdeploy/codebase/mmcls/mmcls.h"
#include "mmdeploy/common_internal.h"
#include "mmdeploy/core/device.h"
#include "mmdeploy/core/graph.h"
#include "mmdeploy/core/utils/formatter.h"
#include "pipeline.h"
#include "mmdeploy/handle.h"
#include "mmdeploy/pipeline.h"
using namespace mmdeploy;
using namespace std;
namespace {
Value config_template(const Model& model) {
// clang-format off
static Value v{
{
"pipeline", {
{"input", {"img"}},
{"output", {"cls"}},
{
"tasks", {
{
{"name", "classifier"},
{"type", "Inference"},
{"params", {{"model", "TBD"}}},
{"input", {"img"}},
{"output", {"cls"}}
}
}
}
}
}
};
// clang-format on
auto config = v;
config["pipeline"]["tasks"][0]["params"]["model"] = model;
return config;
}
} // namespace
int mmdeploy_classifier_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_classifier_t* classifier) {
mmdeploy_context_t context{};
@ -73,8 +42,7 @@ int mmdeploy_classifier_create_by_path(const char* model_path, const char* devic
int mmdeploy_classifier_create_v2(mmdeploy_model_t model, mmdeploy_context_t context,
mmdeploy_classifier_t* classifier) {
auto config = config_template(*Cast(model));
return mmdeploy_pipeline_create_v3(Cast(&config), context, (mmdeploy_pipeline_t*)classifier);
return mmdeploy_pipeline_create_from_model(model, context, (mmdeploy_pipeline_t*)classifier);
}
int mmdeploy_classifier_create_input(const mmdeploy_mat_t* mats, int mat_count,
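
After this change the classifier C API no longer assembles its pipeline JSON inline; it delegates to mmdeploy_pipeline_create_from_model. From Python, the same classifier is reached through the renamed mmdeploy_runtime bindings — a minimal, hedged usage sketch in which the image path and model directory are placeholders:

# Hedged sketch: calling the SDK classifier via the renamed Python bindings.
# Assumes an installed mmdeploy_runtime wheel and a converted SDK model directory.
import cv2
from mmdeploy_runtime import Classifier

img = cv2.imread('demo.jpg')  # placeholder image path
classifier = Classifier(model_path='mmdeploy_models/cls',  # placeholder model dir
                        device_name='cpu', device_id=0)
for label_id, score in classifier(img):
    print(label_id, score)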

Some files were not shown because too many files have changed in this diff.