# MMDeployment

## Installation

- Build backend ops

  - update submodule

    ```bash
    git submodule update --init
    ```
  - Build with onnxruntime support

    ```bash
    mkdir build
    cd build
    cmake -DBUILD_ONNXRUNTIME_OPS=ON -DONNXRUNTIME_DIR=${PATH_TO_ONNXRUNTIME} ..
    make -j10
    ```
  - Build with tensorrt support

    ```bash
    mkdir build
    cd build
    cmake -DBUILD_TENSORRT_OPS=ON -DTENSORRT_DIR=${PATH_TO_TENSORRT} ..
    make -j10
    ```
  - Build with ncnn support

    ```bash
    mkdir build
    cd build
    cmake -DBUILD_NCNN_OPS=ON -DNCNN_DIR=${PATH_TO_NCNN} ..
    make -j10
    ```
  - Or pass multiple flags to build the ops for several backends at once.
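For example, a single configure step could enable both the onnxruntime and tensorrt ops (a sketch; the two `*_DIR` paths are placeholders you must point at your own SDK installations):

```shell
mkdir build
cd build
# Enable the custom ops for two backends in one cmake invocation;
# each BUILD_*_OPS flag is paired with the matching SDK location.
cmake -DBUILD_ONNXRUNTIME_OPS=ON -DONNXRUNTIME_DIR=${PATH_TO_ONNXRUNTIME} \
      -DBUILD_TENSORRT_OPS=ON -DTENSORRT_DIR=${PATH_TO_TENSORRT} ..
make -j10
```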
- Setup project

  ```bash
  python setup.py develop
  ```
## Usage

```bash
python ./tools/deploy.py \
    ${DEPLOY_CFG_PATH} \
    ${MODEL_CFG_PATH} \
    ${MODEL_CHECKPOINT_PATH} \
    ${INPUT_IMG} \
    --work-dir ${WORK_DIR} \
    --device ${DEVICE} \
    --log-level INFO
```
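Filled in with concrete values, an invocation might look like the following. All file names here are hypothetical, chosen purely for illustration; substitute the deploy config, model config, checkpoint, and test image for your own setup:

```shell
# Hypothetical example: convert a detection model for a CUDA device.
# Every path below is a stand-in for your own files.
python ./tools/deploy.py \
    configs/some_deploy_cfg.py \
    /path/to/model_config.py \
    /path/to/checkpoint.pth \
    demo.jpg \
    --work-dir work_dirs/example \
    --device cuda:0 \
    --log-level INFO
```

The converted model and any intermediate files are written to the directory given by `--work-dir`.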