Sync v0.7.0 to dev-1.x ()

* make -install -> make install ()

change `make -install` to `make install`

https://github.com/open-mmlab/mmdeploy/issues/618

* [Fix] fix csharp api detector release result ()

* fix csharp api detector release result

* fix wrong count arg of xxx_release_result in c# api

* [Enhancement] Support two-stage rotated detector with TensorRT. ()

* upload

* add fake_multiclass_nms_rotated

* delete unused code

* align with pytorch

* Update delta_midpointoffset_rbbox_coder.py

* add trt rotated roi align

* add index feature in nms

* not good

* fix index

* add ut

* add benchmark

* move to csrc/mmdeploy

* update unit test

Co-authored-by: zytx121 <592267829@qq.com>

* Reduce mmcls version dependency ()

* fix shufflenetv2 with trt ()

* fix shufflenetv2 and pspnet

* fix ci

* remove print

* ' -> " ()

If there is a variable in the string, single quotes leave it unexpanded, while double quotes substitute the variable's value into the string during parsing
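The quoting difference can be sketched as follows (a minimal illustration driving bash from Python; the variable name is made up for the demo):

```python
import subprocess

# Single quotes keep $name literal; double quotes expand it.
single = subprocess.run(["bash", "-c", "name=world; echo 'hello $name'"],
                        capture_output=True, text=True).stdout.strip()
double = subprocess.run(["bash", "-c", 'name=world; echo "hello $name"'],
                        capture_output=True, text=True).stdout.strip()
print(single)  # hello $name
print(double)  # hello world
```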

* ' -> " ()

same with https://github.com/open-mmlab/mmdeploy/pull/654

* Support deployment of Segmenter ()

* support segmentor with ncnn

* update regression yml

* replace chunk with split to support ts

* update regression yml

* update docs

* fix segmenter ncnn inference failure brought by 

* add test

* fix test for ncnn and trt

* fix lint

* export nn.linear to Gemm op in onnx for ncnn

* fix ci

* simplify `Expand` ()

* Fix typo ()

* Add make install in en docs

* Add make install in zh docs

* Fix typo

* Merge and add windows build

Co-authored-by: tripleMu <865626@163.com>

* [Enhancement] Fix ncnn unittest ()

* optimize-csp-darknet

* replace floordiv with torch.div

* update csp_darknet default implement

* fix test

* [Enhancement] TensorRT Anchor generator plugin ()

* custom trt anchor generator

* add ut

* add docstring, update doc

* Add partition doc and sample code ()

* update torch2onnx tool to support onnx partition

* add model partition of yolov3

* add cn doc

* update torch2onnx tool to support onnx partition

* add model partition of yolov3

* add cn doc

* add to index.rst

* resolve comment

* resolve comments

* fix lint

* change caption level in docs

* update docs ()

* Add java apis and demos ()

* add java classifier detector

* add segmentor

* fix lint

* add ImageRestorer java apis and demo

* remove useless count parameter for Segmentor and Restorer, add PoseDetector

* add RotatedDetection java api and demo

* add Ocr java demo and apis

* remove mmrotate ncnn java api and demo

* fix lint

* sync java api folder after rebase to master

* fix include

* remove record

* fix java apis dir path in cmake

* add java demo readme

* fix lint mdformat

* add test javaapi ci

* fix lint

* fix flake8

* fix test javaapi ci

* refactor readme.md

* fix install opencv for ci

* fix install opencv : add permission

* add all codebases and mmcv install

* add torch

* install mmdeploy

* fix image path

* fix picture path

* fix import ncnn

* fix import ncnn

* add submodule of pybind

* fix pybind submodule

* change download to git clone for submodule

* fix ncnn dir

* fix README error

* simplify the github ci

* fix ci

* fix yapf

* add JNI as required

* fix Capitalize

* fix Capitalize

* fix copyright

* ignore .class changed

* add OpenJDK installation docs

* install target of javaapi

* simplify ci

* add jar

* fix ci

* fix ci

* fix test java command

* debugging what failed

* debugging what failed

* debugging what failed

* add java version info

* install openjdk

* add java env var

* fix export

* fix export

* fix export

* fix export

* fix picture path

* fix picture path

* fix file name

* fix file name

* fix README

* remove java_api strategy

* fix python version

* format task name

* move args position

* extract common utils code

* show image class result

* add detector result

* segmentation result format

* add ImageRestorer result

* add PoseDetection java result format

* fix ci

* stage ocr

* add visualize

* move utils

* fix lint

* fix ocr bugs

* fix ci demo

* fix java classpath for ci

* fix popd

* fix ocr demo text garbled

* fix ci

* fix ci

* fix ci

* fix path of utils ci

* update the circleci config file by adding workflows both for linux, windows and linux-gpu ()

* update circleci by adding more workflows

* fix test workflow failure on windows platform

* fix docker exec command for SDK unittests

* Fixed tensorrt plugin not found in Windows ()

* update introduction.png ()

* [Enhancement] Add fuse select assign pass ()

* Add fuse select assign pass

* move code to csrc

* add config flag

* remove bool cast

* fix export sdk info of input shape ()

* Update get_started.md ()

Fix backend model assignment

* Update get_started.md ()

Fix backend model assignment

* [Fix] fix clang build ()

* fix clang build

* fix ndk build

* fix ndk build

* switch to `std::filesystem` for clang-7 and later

* Deploy the Swin Transformer on TensorRT. ()

* resolve conflicts

* update ut and docs

* fix ut

* refine docstring

* add comments and refine UT

* resolve comments

* resolve comments

* update doc

* add roll export

* check backend

* update regression test

* bump version to 0.6.0 ()

* bump version to 0.6.0

* update version

* pass img_metas while exporting to onnx ()

* pass img_metas while exporting to onnx

* remove try-catch in tools for better debugging

* use get

* fix typo

* [Fix] fix ssd ncnn ut ()

* fix ssd ncnn ut

* fix yapf

* fix passing img_metas to pytorch2onnx for mmedit ()

* fix passing img_metas for mmdet3d ()

* [Fix] Fix android build ()

* fix android build

* fix cmake

* fix url link

* fix wrong exit code in pipeline_manager ()

* fix exit

* change to general exit errorcode=1

* fix passing wrong backend type ()

* Rename onnx2ncnn to mmdeploy_onnx2ncnn ()

* improvement(tools/onnx2ncnn.py): rename to mmdeploy_onnx2ncnn

* format(tools/deploy.py): clean code

* fix(init_plugins.py): improve if condition

* fix(CI): update target

* fix(test_onnx2ncnn.py): update desc

* Update init_plugins.py

* [Fix] Fix mmdet ort static shape bug ()

* fix shape

* add device

* fix yapf

* fix rewriter for transforms

* reverse image shape

* fix ut of distance2bbox

* fix rewriter name

* fix c4 for torchscript ()

* [Enhancement] Standardize C API ()

* unify C API naming

* fix demo and move apis/c/* -> apis/c/mmdeploy/*

* fix lint

* fix C# project

* fix Java API

* [Enhancement] Support Slide Vertex TRT ()

* reorganize mmrotate

* fix

* add hbb2obb

* add ut

* fix rotated nms

* update docs

* update benchmark

* update test

* remove ort regression test, remove comment

* Fix get-started rendering issues in readthedocs ()

* fix mermaid markdown rendering issue in readthedocs

* fix error in C++ example

* fix error in c++ example in zh_cn get_started doc

* [Fix] set default topk for dump info ()

* set default topk for dump info

* remove redundant docstrings

* add ci densenet

* fix classification warnings

* fix mmcls version

* fix logger.warnings

* add version control ()

* fix satrn for ORT ()

* fix satrn for ORT

* move rewrite into pytorch

* Add inference latency test tool ()

* add profile tool

* remove print envs in profile tool

* set cudnn_benchmark to True

* add doc

* update tests

* fix typo

* support test with images from a directory

* update doc

* resolve comments

* [Enhancement] Add CSE ONNX pass ()

* Add fuse select assign pass

* move code to csrc

* add config flag

* Add fuse select assign pass

* Add CSE for ONNX

* remove useless code

* Test robot

Just test robot

* Update README.md

Revert

* [Fix] fix yolox point_generator ()

* fix yolox point_generator

* add a UT

* resolve comments

* fix comment lines

* limit markdown version ()

* [Enhancement] Better index put ONNX export. ()

* Add rewriter for tensor setitem

* add version check

* Upgrade Dockerfile to use TensorRT==8.2.4.2 ()

* Upgrade TensorRT to 8.2.4.2

* upgrade pytorch&mmcv in CPU Dockerfile

* Delete redundant port example in Docker

* change 160x160-608x608 to 64x64-608x608 for yolov3
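The 64x64-608x608 range refers to the TensorRT dynamic input shape range in the deploy config. A hedged sketch of what such a fragment looks like (field names follow mmdeploy's TensorRT backend-config convention; the exact yolov3 values here are illustrative):

```python
# Sketch of a TensorRT dynamic-shape range in an mmdeploy-style deploy config.
# min_shape widened down to 64x64 so smaller inputs still fit the engine.
backend_config = dict(
    common_config=dict(fp16_mode=False, max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 64, 64],
                    opt_shape=[1, 3, 608, 608],
                    max_shape=[1, 3, 608, 608])))
    ])
print(backend_config['model_inputs'][0]['input_shapes']['input']['min_shape'])
```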

* [Fix] reduce log verbosity & improve error reporting ()

* reduce log verbosity & improve error reporting

* improve error reporting

* [Enhancement] Support latest ppl.nn & ppl.cv ()

* support latest ppl.nn

* fix pplnn for model convertor

* fix lint

* update memory policy

* import algo from buffer

* update ppl.cv

* use `ppl.cv==0.7.0`

* document supported ppl.nn version

* skip pplnn dependency when building shared libs

* [Fix][P0] Fix for torch1.12 ()

* fix for torch1.12

* add comment

* fix check env ()

* [Fix] fix cascade mask rcnn ()

* fix cascade mask rcnn

* fix lint

* add regression

* [Feature] Support RoITransRoIHead ()

* [Feature] Support RoITransRoIHead

* Add docs

* Add mmrotate models regression test

* Add a draft for test code

* change the argument name

* fix test code

* fix minor change for not class agnostic case

* fix sample for test code

* fix sample for test code

* Add mmrotate in requirements

* Revert "Add mmrotate in requirements"

This reverts commit 043490075e.

* [Fix] fix triu ()

* fix triu

* triu -> triu_default

* [Enhancement] Install Optimizer by setuptools ()

* Add fuse select assign pass

* move code to csrc

* add config flag

* Add fuse select assign pass

* Add CSE for ONNX

* remove useless code

* Install optimizer by setuptools

* fix comment

* [Feature] support MMRotate model with le135 ()

* support MMRotate model with le135

* cse before fuse select assign

* remove unused import

* [Fix] Support macOS build ()

* fix macOS build

* fix missing

* add option to build & install examples ()

* [Fix] Fix setup on non-linux-x64 ()

* fix setup

* replace long with int64_t
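The motivation for swapping `long` for `int64_t` is that C's `long` is not a fixed-width type (8 bytes on LP64 Linux, 4 bytes on Windows and 32-bit targets), which can be observed from Python:

```python
import ctypes

# sizeof(long) depends on the platform's data model; int64_t never does.
print(ctypes.sizeof(ctypes.c_long))   # 4 or 8, platform dependent
print(ctypes.sizeof(ctypes.c_int64))  # always 8
```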

* [Feature] support build single sdk library ()

* build single lib for c api

* update csharp doc & project

* update test build

* fix test build

* fix

* update document for building android sdk ()

Co-authored-by: dwSun <dwsunny@icloud.com>

* [Enhancement] support kwargs in SDK python bindings ()

* support-kwargs

* make '__call__' perform single-image inference and add a 'batch' API for batch-image inference
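A minimal sketch of that calling convention (a toy stand-in, not the real SDK binding; class and method bodies here are illustrative):

```python
class Detector:
    """Toy stand-in for the binding pattern.

    `__call__` runs inference on one image; `batch` takes a list of images.
    """

    def _infer(self, imgs):
        # stand-in for the real backend inference
        return ['result:' + img for img in imgs]

    def __call__(self, img):
        # single-image convenience path
        return self._infer([img])[0]

    def batch(self, imgs):
        # batched path: one call, a list of results
        return self._infer(imgs)


detector = Detector()
single = detector('cat.jpg')
batched = detector.batch(['a.jpg', 'b.jpg'])
print(single)   # result:cat.jpg
print(batched)  # ['result:a.jpg', 'result:b.jpg']
```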

* fix linting error and typo

* fix lint

* improvement(sdk): add sdk code coverage ()

* feat(doc): add CI

* CI(sdk): add sdk coverage

* style(test): code format

* fix(CI): update coverage.info path

* improvement(CI): use internal image

* improvement(CI): push coverage info once

* [Feature] Add C++ API for SDK ()

* add C++ API

* unify result type & add examples

* minor fix

* install cxx API headers

* fix Mat, add more examples

* fix monolithic build & fix lint

* install examples correctly

* fix lint

* feat(tools/deploy.py): support snpe ()

* fix(tools/deploy.py): support snpe

* improvement(backend/snpe): review advices

* docs(backend/snpe): update build

* docs(backend/snpe): server support specify port

* docs(backend/snpe): update path

* fix(backend/snpe): time counter missing argument

* docs(backend/snpe): add missing argument

* docs(backend/snpe): update download and using

* improvement(snpe_net.cpp): load model with modeldata

* Support setup on environment with no PyTorch ()

* support test with multi batch ()

* support test with multi batch

* resolve comment

* import algorithm from buffer ()

* [Enhancement] build sdk python api in standard-alone manner ()

* build sdk python api in standard-alone manner

* enable MMDEPLOY_BUILD_SDK_MONOLITHIC and MMDEPLOY_BUILD_EXAMPLES in prebuild config

* link mmdeploy to python target when monolithic option is on

* checkin README to describe precompiled package build procedure

* use packaging.version.parse(python_version) instead of list(python_version)
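The rationale: comparing a version string character by character mis-orders two-digit components, while `packaging.version.parse` compares numeric components:

```python
from packaging.version import parse

# Character-wise: '1' < '8', so "3.10" wrongly sorts before "3.8".
naive_wrong = list('3.10') < list('3.8')
# packaging compares numeric components: 10 > 8, the right answer.
semantic_right = parse('3.10') > parse('3.8')
print(naive_wrong, semantic_right)  # True True
```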

* fix according to review results

* rebase master

* rollback cmake.in and apis/python/CMakeLists.txt

* reorganize files in install/example

* let cmake detect visual studio instead of specifying 2019

* rename whl name of precompiled package

* fix according to review results

* Fix SDK backend ()

* fix mmpose python api ()

* add prebuild package usage docs on windows ()

* add prebuild package usage docs on windows

* fix lint

* update

* try fix lint

* add en docs

* update

* update

* update faq

* fix typo ()

* [Enhancement] Improve get_started documents and bump version to 0.7.0 ()

* simplify commands in get_started

* add installation commands for Windows

* fix typo

* limit markdown and sphinx_markdown_tables version

* adopt html <details open> tag

* bump mmdeploy version

* bump mmdeploy version

* update get_started

* update get_started

* use python3.8 instead of python3.7

* remove duplicate section

* resolve issue 

* update according to review results

* add reference to prebuilt_package_windows.md

* fix error when build sdk demos

* fix mmcls

Co-authored-by: Ryan_Huang <44900829+DrRyanHuang@users.noreply.github.com>
Co-authored-by: Chen Xin <xinchen.tju@gmail.com>
Co-authored-by: q.yao <yaoqian@sensetime.com>
Co-authored-by: zytx121 <592267829@qq.com>
Co-authored-by: Li Zhang <lzhang329@gmail.com>
Co-authored-by: tripleMu <gpu@163.com>
Co-authored-by: tripleMu <865626@163.com>
Co-authored-by: hanrui1sensetime <83800577+hanrui1sensetime@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: Bryan Glen Suello <11388006+bgsuello@users.noreply.github.com>
Co-authored-by: zambranohally <63218980+zambranohally@users.noreply.github.com>
Co-authored-by: AllentDan <41138331+AllentDan@users.noreply.github.com>
Co-authored-by: tpoisonooo <khj.application@aliyun.com>
Co-authored-by: Hakjin Lee <nijkah@gmail.com>
Co-authored-by: 孙德伟 <5899962+dwSun@users.noreply.github.com>
Co-authored-by: dwSun <dwsunny@icloud.com>
Co-authored-by: Chen Xin <irexyc@gmail.com>
pull/904/head
RunningLeon 2022-08-19 09:30:13 +08:00 committed by GitHub
parent 27a856637c
commit 4d8ea40f55
460 changed files with 15420 additions and 2707 deletions


@@ -1,36 +1,39 @@
# Use the latest 2.1 version of CircleCI pipeline process engine.
# See: https://circleci.com/docs/2.0/configuration-reference
version: 2.1
# Define a job to be invoked later in a workflow.
# See: https://circleci.com/docs/2.0/configuration-reference/#jobs
jobs:
lint:
# Specify the execution environment. You can specify an image from Dockerhub or use one of our Convenience Images from CircleCI's Developer Hub.
# See: https://circleci.com/docs/2.0/configuration-reference/#docker-machine-macos-windows-executor
docker:
- image: cimg/python:3.7.4
# Add steps to the job
# See: https://circleci.com/docs/2.0/configuration-reference/#steps
steps:
- checkout
- run:
name: Install pre-commit hook
command: |
pip install pre-commit
pre-commit install
- run:
name: Linting
command: pre-commit run --all-files
- run:
name: Check docstring coverage
command: |
pip install interrogate
interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --ignore-regex "__repr__" --fail-under 80 mmdeploy
# this allows you to use CircleCI's dynamic configuration feature
setup: true
# the path-filtering orb is required to continue a pipeline based on
# the path of an updated fileset
orbs:
path-filtering: circleci/path-filtering@0.1.2
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
pr_stage_test:
# the always-run workflow is always triggered, regardless of the pipeline parameters.
always-run:
jobs:
- lint
# the path-filtering/filter job determines which pipeline
# parameters to update.
- path-filtering/filter:
name: check-updated-files
# 3-column, whitespace-delimited mapping. One mapping per
# line:
# <regex path-to-test> <parameter-to-set> <value-of-pipeline-parameter>
mapping: |
.circle/.* lint_only false
cmake/.* lint_only false
configs/.* lint_only false
csrc/.* lint_only false
demo/csrc/.* lint_only false
docker/.* lint_only false
mmdeploy/.* lint_only false
requirements/.* lint_only false
tests/.* lint_only false
third_party/.* lint_only false
tools/.* lint_only false
base-revision: master
# this is the path of the configuration we should trigger once
# path filtering and pipeline parameter value updates are
# complete. In this case, we are using the parent dynamic
# configuration itself.
config-path: .circleci/test.yml


@@ -0,0 +1,41 @@
FROM nvcr.io/nvidia/tensorrt:21.04-py3
ARG CUDA=11.3
ARG PYTHON_VERSION=3.8
ARG TORCH_VERSION=1.10.0
ARG TORCHVISION_VERSION=0.11.0
ARG MMCV_VERSION=1.5.0
ARG PPLCV_VERSION=0.7.0
ENV FORCE_CUDA="1"
ENV DEBIAN_FRONTEND=noninteractive
### update apt and install libs
RUN apt-get update &&\
apt-get install -y libopencv-dev --no-install-recommends &&\
rm -rf /var/lib/apt/lists/*
RUN curl -fsSL -v -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda install -y python=${PYTHON_VERSION} && \
/opt/conda/bin/conda clean -ya
### pytorch
RUN /opt/conda/bin/conda install pytorch==${TORCH_VERSION} torchvision==${TORCHVISION_VERSION} cudatoolkit=${CUDA} -c pytorch -c conda-forge
ENV PATH /opt/conda/bin:$PATH
### install mmcv-full
RUN /opt/conda/bin/pip install mmcv-full==${MMCV_VERSION} -f https://download.openmmlab.com/mmcv/dist/cu${CUDA//./}/torch${TORCH_VERSION}/index.html
WORKDIR /workspace
### build ppl.cv
RUN git clone https://github.com/openppl-public/ppl.cv.git &&\
cd ppl.cv &&\
git checkout tags/v${PPLCV_VERSION} -b v${PPLCV_VERSION} &&\
./build.sh cuda
# RUN ln -sf /opt/conda /home/circleci/project/conda
ENV TENSORRT_DIR=/workspace/tensorrt
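One detail worth noting in the Dockerfile above: the `${CUDA//./}` parameter expansion in the mmcv-full install line deletes every dot, which is how `11.3` becomes the `cu113` suffix in the index URL. The equivalent in Python:

```python
# bash: ${CUDA//./} replaces every "." with nothing, so "11.3" -> "113".
cuda = '11.3'
cu_tag = 'cu' + cuda.replace('.', '')
print(cu_tag)  # cu113
```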


@@ -0,0 +1,16 @@
#!/bin/bash
ARGS=("$@")
cd mmdeploy
MMDEPLOY_DIR=$(pwd)
mkdir -p build && cd build
cmake .. -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_TEST=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
-DMMDEPLOY_BUILD_SDK_CXX_API=ON -DMMDEPLOY_BUILD_SDK_CSHARP_API=ON \
-DMMDEPLOY_TARGET_DEVICES="$1" -DMMDEPLOY_TARGET_BACKENDS="$2" "${ARGS[@]:2}"
make -j$(nproc) && make install
cd install/example
mkdir -p build
cd build
cmake ../cpp -DMMDeploy_DIR="$MMDEPLOY_DIR"/build/install/lib/cmake/MMDeploy "${ARGS[@]:2}" && make -j$(nproc)


@@ -0,0 +1,18 @@
#!/bin/bash
if [ $# != 2 ]; then
echo "wrong command. usage: bash converter.sh <codebase> <work dir>"
exit 1
fi
if [ "$1" == 'mmcls' ]; then
python3 -m pip install mmcls
git clone --recursive https://github.com/open-mmlab/mmclassification.git
wget https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth
python3 mmdeploy/tools/deploy.py \
mmdeploy/configs/mmcls/classification_onnxruntime_dynamic.py \
mmclassification/configs/resnet/resnet18_8xb32_in1k.py \
resnet18_8xb32_in1k_20210831-fbbb1da6.pth \
mmclassification/demo/demo.JPEG \
--work-dir "$2" --dump-info
fi


@@ -0,0 +1,30 @@
#!/bin/bash
if [ $# != 2 ]; then
echo "wrong command. usage: bash install_onnxruntime.sh <cpu|cuda> <version>"
exit 1
fi
PLATFORM=$1
VERSION=$2
if [ "$PLATFORM" == 'cpu' ]; then
python -m pip install onnxruntime=="$VERSION"
wget https://github.com/microsoft/onnxruntime/releases/download/v"$VERSION"/onnxruntime-linux-x64-"$VERSION".tgz
tar -zxvf onnxruntime-linux-x64-"$VERSION".tgz
ln -sf onnxruntime-linux-x64-"$VERSION" onnxruntime
elif [ "$PLATFORM" == 'cuda' ]; then
pip install onnxruntime-gpu=="$VERSION"
wget https://github.com/microsoft/onnxruntime/releases/download/v"$VERSION"/onnxruntime-linux-x64-gpu-"$VERSION".tgz
tar -zxvf onnxruntime-linux-x64-gpu-"$VERSION".tgz
ln -sf onnxruntime-linux-x64-gpu-"$VERSION" onnxruntime
else
echo "'$PLATFORM' is not supported"
exit 1
fi
export ONNXRUNTIME_DIR=$(pwd)/onnxruntime
echo "export ONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH" >> ~/.bashrc


@@ -0,0 +1,17 @@
#!/bin/bash
if [ $# -lt 1 ]; then
echo 'use python 3.8.5 as default'
PYTHON_VERSION=3.8.5
else
PYTHON_VERSION=$1
fi
sudo apt-get update
# liblzma-dev need to be installed. Refer to https://github.com/pytorch/vision/issues/2921
# python3-tk tk-dev is for 'import tkinter'
sudo apt-get install -y liblzma-dev python3-tk tk-dev
# python3+ need to be reinstalled due to https://github.com/pytorch/vision/issues/2921
pyenv uninstall -f "$PYTHON_VERSION"
pyenv install "$PYTHON_VERSION"
pyenv global "$PYTHON_VERSION"


@@ -0,0 +1,19 @@
if ($args.Count -lt 2) {
Write-Host "wrong command. usage: install_onnxruntime.ps1 <cpu|cuda> <version>"
Exit 1
}
$platform = $args[0]
$version = $args[1]
if ($platform -eq "cpu") {
python -m pip install onnxruntime==$version
Invoke-WebRequest -Uri https://github.com/microsoft/onnxruntime/releases/download/v$version/onnxruntime-win-x64-$version.zip -OutFile onnxruntime.zip
Expand-Archive onnxruntime.zip .
Move-Item onnxruntime-win-x64-$version onnxruntime
} elseif ($platform -eq "cuda") {
Write-Host "TODO: install onnxruntime-gpu"
Exit
} else {
Write-Host "'$platform' is not supported"
Exit 1
}


@@ -0,0 +1,3 @@
Invoke-WebRequest -Uri https://download.openmmlab.com/mmdeploy/library/opencv-4.5.5.zip -OutFile opencv.zip
Expand-Archive opencv.zip .
Move-Item opencv-4.5.5 opencv

.circleci/test.yml

@@ -0,0 +1,313 @@
# Use the latest 2.1 version of CircleCI pipeline process engine.
# See: https://circleci.com/docs/2.0/configuration-reference
version: 2.1
orbs:
  win: circleci/windows@4.1
# the default pipeline parameters, which will be updated according to
# the results of the path-filtering orb
parameters:
  lint_only:
    type: boolean
    default: true
executors:
  ubuntu-2004-cpu:
    machine:
      image: ubuntu-2004:202010-01
    resource_class: large
    working_directory: ~
  ubuntu-2004-cu114:
    machine:
      image: ubuntu-2004-cuda-11.4:202110-01
      docker_layer_caching: true
    resource_class: gpu.nvidia.medium
    working_directory: ~
# MMDeploy Rules
# - In the command section, each command is requested to be os platform independent. Any command related to OS platform should be put in `scripts` folder
# - Use `python` instead of `python3` since there is no `python3` on Windows platform
# - DO NOT use `\` to break the line, as it is not identified correctly on Windows platform. So just don't break the line :)
commands:
  checkout_full:
    description: "Checkout mmdeploy"
    steps:
      - checkout:
          path: mmdeploy # relative to `working_directory`
      - run:
          name: Checkout submodule
          command: |
            cd mmdeploy
            git submodule sync
            git submodule update --init
  upgrade_pip:
    steps:
      - run:
          name: Upgrade pip
          command: python -m pip install --upgrade pip
  install_pytorch:
    parameters:
      platform:
        type: string
        default: cpu
      torch:
        type: string
        default: 1.8.0
      torchvision:
        type: string
        default: 0.9.0
    steps:
      - run:
          name: Install PyTorch
          command: |
            python -m pip install torch==<< parameters.torch >>+<< parameters.platform >> torchvision==<< parameters.torchvision >>+<< parameters.platform >> -f https://download.pytorch.org/whl/torch_stable.html
  install_mmcv_cpu:
    parameters:
      version:
        type: string
        default: 1.5.0
      torch:
        type: string
        default: 1.8.0
    steps:
      - run:
          name: Install mmcv-full
          command: |
            python -m pip install opencv-python==4.5.4.60
            python -m pip install mmcv-full==<< parameters.version >> -f https://download.openmmlab.com/mmcv/dist/cpu/torch<< parameters.torch >>/index.html
  install_mmcv_cuda:
    parameters:
      version:
        type: string
        default: 1.5.0
      cuda:
        type: string
        default: cu111
      torch:
        type: string
        default: 1.8.0
    steps:
      - run:
          name: Install mmcv-full
          command: |
            python -m pip install opencv-python==4.5.4.60
            python -m pip install mmcv-full==<< parameters.version >> -f https://download.openmmlab.com/mmcv/dist/<< parameters.cuda >>/torch<< parameters.torch >>/index.html
  install_mmdeploy:
    description: "Install MMDeploy"
    steps:
      - run:
          name: Install MMDeploy
          command: |
            cd mmdeploy
            python -m pip install -v -e .
  install_model_converter_req:
    steps:
      - run:
          name: Install requirements
          command: |
            cd mmdeploy
            python -m pip install -r requirements/codebases.txt
            python -m pip install -r requirements/tests.txt
            python -m pip install -r requirements/runtime.txt
            python -m pip install -U numpy
            cd ..
  perform_model_converter_ut:
    steps:
      - run:
          name: Perform Model Converter unittests
          command: |
            cd mmdeploy
            coverage run --branch --source mmdeploy -m pytest -rsE tests
            coverage xml
            coverage report -m
            cd ..
jobs:
  lint:
    # Specify the execution environment. You can specify an image from Dockerhub or use one of our Convenience Images from CircleCI's Developer Hub.
    # See: https://circleci.com/docs/2.0/configuration-reference/#docker-machine-macos-windows-executor
    docker:
      - image: cimg/python:3.7.4
    # Add steps to the job
    # See: https://circleci.com/docs/2.0/configuration-reference/#steps
    steps:
      - checkout
      - run:
          name: Install pre-commit hook
          command: |
            pip install pre-commit
            pre-commit install
      - run:
          name: Linting
          command: pre-commit run --all-files
      - run:
          name: Check docstring coverage
          command: |
            pip install interrogate
            interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --ignore-regex "__repr__" --fail-under 80 mmdeploy
  test_linux_tensorrt:
    executor: ubuntu-2004-cu114
    steps:
      - checkout_full
      - run:
          name: Build docker
          command: |
            docker build mmdeploy/.circleci/docker/ -t mmdeploy:gpu
      - run:
          name: Build MMDeploy
          command: |
            docker run --gpus all -t -d -v /home/circleci/project/:/project -w /project --name mmdeploy mmdeploy:gpu
            docker exec mmdeploy bash mmdeploy/.circleci/scripts/linux/build.sh cuda trt -Dpplcv_DIR=/workspace/ppl.cv/cuda-build/install/lib/cmake/ppl
      - run:
          name: Install MMDeploy
          # https://stackoverflow.com/questions/28037802/docker-exec-failed-cd-executable-file-not-found-in-path
          command: |
            docker exec -i mmdeploy bash -c "cd mmdeploy && pip install -v -e ."
      - run:
          name: Install requirements
          command: |
            docker exec mmdeploy pip install onnxruntime==1.8.1
            docker exec mmdeploy pip install -r mmdeploy/requirements/codebases.txt
            docker exec mmdeploy pip install -r mmdeploy/requirements/tests.txt
            docker exec mmdeploy pip install -r mmdeploy/requirements/runtime.txt
            docker exec mmdeploy pip install -U numpy
      - run:
          name: Perform Model Converter unittests
          command: |
            docker exec -i mmdeploy bash -c "cd mmdeploy && coverage run --branch --source mmdeploy -m pytest -rsE tests && coverage xml && coverage report -m"
      - run:
          name: Run SDK unittests
          command: |
            docker exec mmdeploy mkdir -p mmdeploy_test_resources/transform
            docker exec mmdeploy cp mmdeploy/demo/resources/human-pose.jpg mmdeploy_test_resources/transform
            docker exec mmdeploy ./mmdeploy/build/bin/mmdeploy_tests
  test_windows_onnxruntime:
    parameters:
      version:
        type: string
        default: 1.8.1
    executor:
      name: win/default
    steps:
      - checkout_full
      - upgrade_pip
      - install_pytorch
      - install_mmcv_cpu
      - run:
          name: Install ONNX Runtime
          command: mmdeploy/.circleci/scripts/windows/install_onnxruntime.ps1 cpu << parameters.version >>
      - run:
          name: Install OpenCV
          command: mmdeploy/.circleci/scripts/windows/install_opencv.ps1
      - run:
          name: Build MMDeploy
          command: |
            $env:path = "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;" + $env:path
            $env:ONNXRUNTIME_DIR = "$pwd\onnxruntime"
            $env:OPENCV_PACKAGE_DIR = "$(pwd)\opencv"
            $env:MMDEPLOY_DIR = "$(pwd)\mmdeploy"
            cd mmdeploy
            mkdir build -ErrorAction SilentlyContinue
            cd build
            cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
              -DCMAKE_SYSTEM_VERSION="10.0.18362.0" `
              -DMMDEPLOY_BUILD_SDK=ON `
              -DMMDEPLOY_BUILD_TEST=ON `
              -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON `
              -DMMDEPLOY_BUILD_SDK_CXX_API=ON `
              -DMMDEPLOY_BUILD_SDK_CSHARP_API=ON `
              -DMMDEPLOY_TARGET_BACKENDS="ort" `
              -DOpenCV_DIR="$env:OPENCV_PACKAGE_DIR"
            cmake --build . --config Release -- /m
            cmake --install . --config Release
            cd install/example
            mkdir build -ErrorAction SilentlyContinue
            cd build
            cmake ../cpp -G "Visual Studio 16 2019" -A x64 -T v142 `
              -DMMDeploy_DIR="$env:MMDEPLOY_DIR/build/install/lib/cmake/MMDeploy" `
              -DOpenCV_DIR="$env:OPENCV_PACKAGE_DIR"
            cmake --build . --config Release -- /m
      - install_mmdeploy
      - install_model_converter_req
      - perform_model_converter_ut
      - run:
          name: Perform SDK Unittests
          command: |
            $env:path = "$(pwd)\onnxruntime\lib;" + $env:path
            $env:path = "$(pwd)\opencv\x64\vc16\bin;" + $env:path
            mkdir mmdeploy_test_resources\transform
            cp .\mmdeploy\demo\resources\human-pose.jpg mmdeploy_test_resources\transform
            .\mmdeploy\build\bin\Release\mmdeploy_tests.exe
  test_linux_onnxruntime:
    parameters:
      version:
        type: string
        default: 1.8.1
    executor: ubuntu-2004-cpu
    steps:
      - checkout_full
      - run:
          name: Re-install Python
          command: bash mmdeploy/.circleci/scripts/linux/install_python.sh
      - upgrade_pip
      - install_pytorch
      - install_mmcv_cpu
      - run:
          name: Install ONNX Runtime
          command: bash mmdeploy/.circleci/scripts/linux/install_onnxruntime.sh cpu << parameters.version >>
      - run:
          name: Build MMDeploy
          command: |
            sudo apt-get update
            sudo apt-get install libopencv-dev libpython3.8 python3.8-dev
            bash mmdeploy/.circleci/scripts/linux/build.sh cpu ort
      - install_mmdeploy
      - install_model_converter_req
      - perform_model_converter_ut
      - run:
          name: Perform SDK unittests
          command: |
            mkdir -p mmdeploy_test_resources/transform
            cp -rf ./mmdeploy/demo/resources/human-pose.jpg mmdeploy_test_resources/transform
            ./mmdeploy/build/bin/mmdeploy_tests
      - run:
          name: Convert model
          command: |
            bash mmdeploy/.circleci/scripts/linux/convert_onnxruntime.sh mmcls mmdeploy-models/mmcls/onnxruntime
      - run:
          name: Inference model by SDK
          command: |
            mmdeploy/build/install/example/build/image_classification cpu mmdeploy-models/mmcls/onnxruntime mmclassification/demo/demo.JPEG
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  pr_stage_lint:
    when: << pipeline.parameters.lint_only >>
    jobs:
      - lint
  pr_stage_test:
    when:
      not:
        << pipeline.parameters.lint_only >>
    jobs:
      - lint
      - test_linux_onnxruntime:
          version: 1.8.1
          requires:
            - lint
      - test_windows_onnxruntime:
          version: 1.8.1
          requires:
            - lint
      - hold:
          type: approval
          requires:
            - test_linux_onnxruntime
            - test_windows_onnxruntime
      - test_linux_tensorrt:
          requires:
            - hold


@ -0,0 +1,84 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os
# list of dict: task name and deploy configs.
PARAMS = [
{
'task':
'ImageClassification',
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/resnet.tar' # noqa: E501
]
},
{
'task':
'ObjectDetection',
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/mobilessd.tar' # noqa: E501
]
},
{
'task':
'ImageSegmentation',
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/fcn.tar' # noqa: E501
]
},
{
'task':
'ImageRestorer',
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/srcnn.tar' # noqa: E501
]
},
{
'task':
'Ocr',
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/dbnet.tar', # noqa: E501
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/crnn.tar' # noqa: E501
]
},
{
'task':
'PoseDetection',
'configs': [
'https://media.githubusercontent.com/media/hanrui1sensetime/mmdeploy-javaapi-testdata/master/litehrnet.tar' # noqa: E501
]
}
]
def main():
"""test java apis and demos.
Run all java demos for test.
"""
for params in PARAMS:
task = params['task']
configs = params['configs']
java_demo_cmd = [
'java', '-cp', 'csrc/mmdeploy/apis/java:demo/java',
'demo/java/' + task + '.java', 'cpu'
]
for config in configs:
model_url = config
os.system('wget {} && tar xvf {}'.format(model_url,
model_url.split('/')[-1]))
model_dir = model_url.split('/')[-1].split('.')[0]
java_demo_cmd.append(model_dir)
java_demo_cmd.append('/home/runner/work/mmdeploy/mmdeploy/demo' +
'/resources/human-pose.jpg')
java_demo_cmd_str = ' '.join(java_demo_cmd)
os.system('export JAVA_HOME=/home/runner/work/mmdeploy/mmdeploy/' +
'jdk-18 && export PATH=${JAVA_HOME}/bin:${PATH} && java' +
' --version && export LD_LIBRARY_PATH=/home/runner/work/' +
'mmdeploy/mmdeploy/build/lib:${LD_LIBRARY_PATH} && ' +
java_demo_cmd_str)
if __name__ == '__main__':
main()
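The script above shells out via `os.system` with string concatenation, which is fragile with paths that contain spaces or shell metacharacters. A hedged sketch of the same command assembly as a testable helper (the name `build_demo_cmd` is ours, not part of the script):

```python
def build_demo_cmd(task, model_dirs, image_path):
    """Mirror the java invocation used in test_java_demo.py:

    java -cp csrc/mmdeploy/apis/java:demo/java demo/java/<Task>.java cpu <model_dir>... <image>
    """
    cmd = [
        'java', '-cp', 'csrc/mmdeploy/apis/java:demo/java',
        'demo/java/' + task + '.java', 'cpu'
    ]
    cmd.extend(model_dirs)
    cmd.append(image_path)
    return cmd
```

Building the argument list once and handing it to `subprocess.run(cmd, check=True)` avoids the quoting and error-swallowing issues of `os.system`.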


@ -33,7 +33,8 @@ CONFIGS = [
def parse_args():
parser = argparse.ArgumentParser(
description='MMDeploy onnx2ncnn test tool.')
parser.add_argument('--run', type=bool, help='Execute onnx2ncnn bin.')
parser.add_argument(
'--run', type=bool, help='Execute mmdeploy_onnx2ncnn bin.')
parser.add_argument(
'--repo-dir', type=str, default='~/', help='mmcls directory.')
parser.add_argument(
@ -77,14 +78,16 @@ def run(args):
# show progress bar
os.system(' '.join(download_cmd))
convert_cmd = ['./onnx2ncnn', filename, 'onnx.param', 'onnx.bin']
convert_cmd = [
'./mmdeploy_onnx2ncnn', filename, 'onnx.param', 'onnx.bin'
]
subprocess.run(convert_cmd, capture_output=True, check=True)
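Note that `--run` above is declared with `type=bool`, a well-known argparse pitfall: `bool(s)` is `True` for any non-empty string, so even `--run False` enables the flag. A minimal sketch of the conventional alternative (an illustration, not a change this patch makes):

```python
import argparse

parser = argparse.ArgumentParser(description='MMDeploy onnx2ncnn test tool.')
# action='store_true' gives a proper on/off flag, whereas type=bool
# would treat every non-empty value (even the string "False") as True.
parser.add_argument(
    '--run', action='store_true', help='Execute mmdeploy_onnx2ncnn bin.')
```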
def main():
"""test `onnx2ncnn.cpp`
First generate onnx model then convert it with `onnx2ncnn`.
First generate onnx model then convert it with `mmdeploy_onnx2ncnn`.
"""
args = parse_args()
if args.generate_onnx:


@ -1,4 +1,4 @@
name: backend
name: backend-ncnn
on:
push:
@ -23,7 +23,6 @@ jobs:
matrix:
python-version: [3.7]
torch: [1.9.0]
mmcv: [1.4.2]
include:
- torch: 1.9.0
torch_version: torch1.9
@ -59,10 +58,10 @@ jobs:
mkdir -p build && pushd build
export LD_LIBRARY_PATH=/home/runner/work/mmdeploy/mmdeploy/ncnn-20220420/install/lib/:$LD_LIBRARY_PATH
cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=/home/runner/work/mmdeploy/mmdeploy/ncnn-20220420/install/lib/cmake/ncnn/ ..
make onnx2ncnn -j2
make mmdeploy_onnx2ncnn -j2
popd
- name: Test onnx2ncnn
run: |
echo $(pwd)
ln -s build/bin/onnx2ncnn ./
ln -s build/bin/mmdeploy_onnx2ncnn ./
python3 .github/scripts/test_onnx2ncnn.py --run 1


@ -0,0 +1,60 @@
name: backend-snpe
on:
push:
paths-ignore:
- "demo/**"
- "tools/**"
pull_request:
paths-ignore:
- "demo/**"
- "tools/**"
- "docs/**"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build_sdk_demo:
runs-on: ubuntu-18.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: update
run: sudo apt update
- name: Install dependencies
run: |
sudo apt install wget libprotobuf-dev protobuf-compiler
sudo apt update
sudo apt install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev libc++1-9 libc++abi1-9
sudo add-apt-repository ppa:ignaciovizzo/opencv3-nonfree
sudo apt install libopencv-dev
pkg-config --libs opencv
- name: Install snpe
run: |
wget https://media.githubusercontent.com/media/tpoisonooo/mmdeploy_snpe_testdata/main/snpe-1.59.tar.gz
tar xf snpe-1.59.tar.gz
pushd snpe-1.59.0.3230
pwd
popd
- name: Build SDK Demo with SNPE backend
run: |
mkdir -p build && pushd build
export SNPE_ROOT=/home/runner/work/mmdeploy/mmdeploy/snpe-1.59.0.3230
export LD_LIBRARY_PATH=${SNPE_ROOT}/lib/x86_64-linux-clang:${LD_LIBRARY_PATH}
export MMDEPLOY_SNPE_X86_CI=1
cmake .. -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_SHARED_LIBS=ON -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=OFF -DMMDEPLOY_TARGET_DEVICES=cpu -DMMDEPLOY_TARGET_BACKENDS=snpe -DMMDEPLOY_CODEBASES=all
make -j2
make install
pushd install/example
mkdir build && pushd build
cmake ../cpp -DMMDeploy_DIR=${PWD}/../../lib/cmake/MMDeploy
make -j2
ls ./*
popd
popd
popd


@ -21,7 +21,7 @@ concurrency:
cancel-in-progress: true
jobs:
build_cpu:
build_cpu_model_convert:
runs-on: ubuntu-18.04
strategy:
matrix:
@ -53,12 +53,41 @@ jobs:
pip install -U numpy
- name: Build and install
run: rm -rf .eggs && pip install -e .
- name: Run unittests and generate coverage report
- name: Run python unittests and generate coverage report
run: |
coverage run --branch --source mmdeploy -m pytest -rsE tests
coverage xml
coverage report -m
build_cpu_sdk:
runs-on: ubuntu-18.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: update
run: sudo apt update
- name: gcc-multilib
run: |
sudo apt install gcc-multilib g++-multilib wget libprotobuf-dev protobuf-compiler
sudo apt update
sudo apt install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev libc++1-9 libc++abi1-9
sudo add-apt-repository ppa:ignaciovizzo/opencv3-nonfree
sudo apt install libopencv-dev lcov wget
pkg-config --libs opencv
- name: Build and run SDK unit test without backend
run: |
mkdir -p build && pushd build
cmake .. -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_CODEBASES=all -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=OFF -DMMDEPLOY_TARGET_DEVICES=cpu -DMMDEPLOY_COVERAGE=ON -DMMDEPLOY_BUILD_TEST=ON
make -j2
mkdir -p mmdeploy_test_resources/transform
cp ../tests/data/tiger.jpeg mmdeploy_test_resources/transform/
./bin/mmdeploy_tests
lcov --capture --directory . --output-file coverage.info
ls -lah coverage.info
cp coverage.info ../
build_cuda102:
runs-on: ubuntu-18.04
container:
@ -153,8 +182,8 @@ jobs:
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
with:
file: ./coverage.xml
file: ./coverage.xml,./coverage.info
flags: unittests
env_vars: OS,PYTHON
env_vars: OS,PYTHON,CPLUS
name: codecov-umbrella
fail_ci_if_error: false

.github/workflows/java_api.yml

@ -0,0 +1,72 @@
name: java_api
on:
push:
paths-ignore:
- "tools/**"
pull_request:
paths-ignore:
- "tools/**"
- "docs/**"
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
test_java_api:
runs-on: ubuntu-18.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: 'recursive'
- name: Set up Python 3.7
uses: actions/setup-python@v2
with:
python-version: 3.7
- name: Install unittest dependencies
run: |
pip install cmake onnx
- name: update
run: sudo apt update
- name: Install OpenJDK
run: |
wget https://download.java.net/java/GA/jdk18/43f95e8614114aeaa8e8a5fcf20a682d/36/GPL/openjdk-18_linux-x64_bin.tar.gz
tar xvf openjdk-18_linux-x64_bin.tar.gz
- name: gcc-multilib
run: sudo apt install gcc-multilib g++-multilib wget libprotobuf-dev protobuf-compiler
- name: Install onnxruntime
run: |
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
pushd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=${PWD}
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
popd
- name: Install opencv
run: |
sudo apt-get install libopencv-dev
- name: Build java class
run: |
pushd csrc/mmdeploy/apis/java
javac mmdeploy/*.java
popd
pushd demo/java
javac -classpath ../../csrc/mmdeploy/apis/java/ Utils.java
popd
- name: Install mmdeploy with onnxruntime backend and java api
run: |
mkdir -p build && pushd build
export LD_LIBRARY_PATH=/home/runner/work/mmdeploy/mmdeploy/ncnn/install/lib/:$LD_LIBRARY_PATH
cmake -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_JAVA_API=ON -DMMDEPLOY_TARGET_BACKENDS=ort -DMMDEPLOY_CODEBASES=all -DONNXRUNTIME_DIR=~/work/mmdeploy/mmdeploy/onnxruntime-linux-x64-1.8.1 ..
make install
popd
- name: Test javademo
run: |
export JAVA_HOME=${PWD}/jdk-18
export PATH=${JAVA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=/build/lib:${LD_LIBRARY_PATH}
java --version
python3 .github/scripts/test_java_demo.py

.gitignore

@ -6,6 +6,10 @@ __pycache__/
# C extensions
*.so
onnx2ncnn
mmdeploy_onnx2ncnn
# Java classes
*.class
# Distribution / packaging
.Python
@ -148,3 +152,7 @@ bin/
mmdeploy/backend/ncnn/onnx2ncnn
/mmdeploy-*
# snpe
grpc-cpp-plugin
service/snpe/grpc_cpp_plugin


@ -3,6 +3,7 @@ repos:
rev: 4.0.1
hooks:
- id: flake8
args: ["--exclude=*/client/inference_pb2.py,*/client/inference_pb2_grpc.py"]
- repo: https://github.com/PyCQA/isort
rev: 5.10.1
hooks:


@ -5,10 +5,13 @@ endif ()
message(STATUS "CMAKE_INSTALL_PREFIX: ${CMAKE_INSTALL_PREFIX}")
cmake_minimum_required(VERSION 3.14)
project(MMDeploy VERSION 0.5.0)
project(MMDeploy VERSION 0.7.0)
set(CMAKE_CXX_STANDARD 17)
set(MMDEPLOY_VERSION_MAJOR ${PROJECT_VERSION_MAJOR})
set(MMDEPLOY_VERSION_MINOR ${PROJECT_VERSION_MINOR})
set(MMDEPLOY_VERSION_PATCH ${PROJECT_VERSION_PATCH})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
if (MSVC)
@ -21,12 +24,17 @@ set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
# options
option(MMDEPLOY_SHARED_LIBS "build shared libs" ON)
option(MMDEPLOY_BUILD_SDK "build MMDeploy SDK" OFF)
option(MMDEPLOY_BUILD_SDK_MONOLITHIC "build single lib for SDK API" OFF)
option(MMDEPLOY_BUILD_TEST "build unittests" OFF)
option(MMDEPLOY_BUILD_SDK_PYTHON_API "build SDK Python API" OFF)
option(MMDEPLOY_BUILD_SDK_CXX_API "build SDK C++ API" OFF)
option(MMDEPLOY_BUILD_SDK_CSHARP_API "build SDK C# API support" OFF)
option(MMDEPLOY_BUILD_SDK_JAVA_API "build SDK JAVA API" OFF)
option(MMDEPLOY_BUILD_EXAMPLES "build examples" OFF)
option(MMDEPLOY_SPDLOG_EXTERNAL "use external spdlog" OFF)
option(MMDEPLOY_ZIP_MODEL "support SDK model in zip format" OFF)
option(MMDEPLOY_COVERAGE "build SDK for coverage" OFF)
set(MMDEPLOY_TARGET_DEVICES "cpu" CACHE STRING "target devices to support")
set(MMDEPLOY_TARGET_BACKENDS "" CACHE STRING "target inference engines to support")
set(MMDEPLOY_CODEBASES "all" CACHE STRING "select OpenMMLab codebases")
@ -43,6 +51,11 @@ endif ()
set(MMDEPLOY_TASKS "" CACHE INTERNAL "")
if (MMDEPLOY_COVERAGE)
add_compile_options(-coverage -fprofile-arcs -ftest-coverage)
add_link_options(-coverage -lgcov)
endif ()
# when CUDA devices are enabled, the environment variable ASAN_OPTIONS=protect_shadow_gap=0
# must be set at runtime
if (MMDEPLOY_ASAN_ENABLE)
@ -92,6 +105,15 @@ if (MMDEPLOY_BUILD_SDK)
add_subdirectory(csrc/mmdeploy/apis/python)
endif ()
if (MMDEPLOY_BUILD_SDK_JAVA_API)
add_subdirectory(csrc/mmdeploy/apis/java)
endif ()
if (MMDEPLOY_BUILD_EXAMPLES)
include(${CMAKE_SOURCE_DIR}/cmake/opencv.cmake)
add_subdirectory(demo/csrc)
endif ()
# export MMDeploy package
install(EXPORT MMDeployTargets
FILE MMDeployTargets.cmake
@ -105,7 +127,10 @@ if (MMDEPLOY_BUILD_SDK)
mmdeploy_add_deps(ort BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS ONNXRUNTIME)
mmdeploy_add_deps(ncnn BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS ncnn)
mmdeploy_add_deps(openvino BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS InferenceEngine)
mmdeploy_add_deps(pplnn BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS pplnn)
if (NOT MMDEPLOY_SHARED_LIBS)
mmdeploy_add_deps(pplnn BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS pplnn)
endif ()
mmdeploy_add_deps(snpe BACKENDS ${MMDEPLOY_TARGET_BACKENDS} DEPS snpe)
include(CMakePackageConfigHelpers)
# generate the config file that includes the exports
@ -141,8 +166,6 @@ if (MMDEPLOY_BUILD_SDK)
DESTINATION lib/cmake/MMDeploy
)
install(DIRECTORY ${CMAKE_SOURCE_DIR}/demo/csrc/ DESTINATION example)
if (${CMAKE_VERSION} VERSION_LESS "3.17.0")
install(SCRIPT cmake/post-install.cmake)
endif ()


@ -5,3 +5,6 @@ include mmdeploy/backend/ncnn/*.pyd
include mmdeploy/lib/*.so
include mmdeploy/lib/*.dll
include mmdeploy/lib/*.pyd
include mmdeploy/backend/torchscript/*.so
include mmdeploy/backend/torchscript/*.dll
include mmdeploy/backend/torchscript/*.pyd


@ -4,7 +4,7 @@
<div align="center">
<b><font size="5">OpenMMLab website</font></b>
<sup>
<a href="https://openmmlab.com">
<a href="https://openmmlab.com">
<i><font size="4">HOT</font></i>
</a>
</sup>
@ -55,9 +55,9 @@ The currently supported codebases and models are as follows, and more will be in
Models can be exported and run in the following backends, and more will be compatible
| ONNX Runtime | TensorRT | ppl.nn | ncnn | OpenVINO | LibTorch | more |
| ------------ | -------- | ------ | ---- | -------- | -------- | ---------------------------------------------- |
| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | [benchmark](docs/en/03-benchmark/benchmark.md) |
| ONNX Runtime | TensorRT | ppl.nn | ncnn | OpenVINO | LibTorch | snpe | more |
| ------------ | -------- | ------ | ---- | -------- | -------- | ---- | ---------------------------------------------- |
| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | [benchmark](docs/en/03-benchmark/benchmark.md) |
### Efficient and scalable C/C++ SDK Framework
@ -73,6 +73,7 @@ Please read [getting_started.md](docs/en/get_started.md) for the basic usage of
- [Build for Win10](docs/en/01-how-to-build/windows.md)
- [Build for Android](docs/en/01-how-to-build/android.md)
- [Build for Jetson](docs/en/01-how-to-build/jetsons.md)
- [Build for SNPE](docs/en/01-how-to-build/snpe.md)
- User Guide
- [How to convert model](docs/en/02-how-to-run/convert_model.md)
- [How to write config](docs/en/02-how-to-run/write_config.md)


@ -53,9 +53,9 @@ MMDeploy 是 [OpenMMLab](https://openmmlab.com/) 模型部署工具箱,**为
### 支持多种推理后端
| ONNX Runtime | TensorRT | ppl.nn | ncnn | OpenVINO | more |
| ------------ | -------- | ------ | ---- | -------- | ------------------------------------------------- |
| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | [benchmark](docs/zh_cn/03-benchmark/benchmark.md) |
| ONNX Runtime | TensorRT | ppl.nn | ncnn | OpenVINO | LibTorch | snpe | more |
| ------------ | -------- | ------ | ---- | -------- | -------- | ---- | ------------------------------------------------- |
| ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | [benchmark](docs/zh_cn/03-benchmark/benchmark.md) |
### SDK 可高度定制化
@ -71,6 +71,7 @@ MMDeploy 是 [OpenMMLab](https://openmmlab.com/) 模型部署工具箱,**为
- [Build for Win10](docs/zh_cn/01-how-to-build/windows.md)
- [Build for Android](docs/zh_cn/01-how-to-build/android.md)
- [Build for Jetson](docs/en/01-how-to-build/jetsons.md)
- [Build for SNPE](docs/zh_cn/01-how-to-build/snpe.md)
- 使用
- [把模型转换到推理 Backend](docs/zh_cn/02-how-to-run/convert_model.md)
- [配置转换参数](docs/zh_cn/02-how-to-run/write_config.md)


@ -126,10 +126,16 @@ function (mmdeploy_load_static NAME)
target_link_libraries(${NAME} PRIVATE ${ARGN})
else ()
_mmdeploy_flatten_modules(_MODULE_LIST ${ARGN})
target_link_libraries(${NAME} PRIVATE
-Wl,--whole-archive
${_MODULE_LIST}
-Wl,--no-whole-archive)
if (APPLE)
foreach (module IN LISTS _MODULE_LIST)
target_link_libraries(${NAME} PRIVATE -force_load ${module})
endforeach ()
else ()
target_link_libraries(${NAME} PRIVATE
-Wl,--whole-archive
${_MODULE_LIST}
-Wl,--no-whole-archive)
endif ()
endif ()
endfunction ()
@ -158,6 +164,8 @@ function (mmdeploy_load_dynamic NAME)
mmdeploy_add_module(${_LOADER_NAME} STATIC EXCLUDE ${_LOADER_PATH})
mmdeploy_load_static(${NAME} ${_LOADER_NAME})
elseif (APPLE)
target_link_libraries(${NAME} PRIVATE ${_MODULE_LIST})
else ()
target_link_libraries(${NAME} PRIVATE
-Wl,--no-as-needed


@ -10,6 +10,11 @@ set(MMDEPLOY_TARGET_DEVICES @MMDEPLOY_TARGET_DEVICES@)
set(MMDEPLOY_TARGET_BACKENDS @MMDEPLOY_TARGET_BACKENDS@)
set(MMDEPLOY_BUILD_TYPE @CMAKE_BUILD_TYPE@)
set(MMDEPLOY_BUILD_SHARED @MMDEPLOY_SHARED_LIBS@)
set(MMDEPLOY_BUILD_SDK_CXX_API @MMDEPLOY_BUILD_SDK_CXX_API@)
set(MMDEPLOY_BUILD_SDK_MONOLITHIC @MMDEPLOY_BUILD_SDK_MONOLITHIC@)
set(MMDEPLOY_VERSION_MAJOR @MMDEPLOY_VERSION_MAJOR@)
set(MMDEPLOY_VERSION_MINOR @MMDEPLOY_VERSION_MINOR@)
set(MMDEPLOY_VERSION_PATCH @MMDEPLOY_VERSION_PATCH@)
if (NOT MMDEPLOY_BUILD_SHARED)
if ("cuda" IN_LIST MMDEPLOY_TARGET_DEVICES)


@ -0,0 +1 @@
backend_config = dict(type='snpe')


@ -6,4 +6,5 @@ onnx_config = dict(
save_file='end2end.onnx',
input_names=['input'],
output_names=['output'],
input_shape=None)
input_shape=None,
optimize=True)


@ -0,0 +1,3 @@
_base_ = ['./classification_static.py', '../_base_/backends/snpe.py']
onnx_config = dict(input_shape=None)


@ -0,0 +1,13 @@
_base_ = ['./classification_static.py', '../_base_/backends/tensorrt-fp16.py']
onnx_config = dict(input_shape=[384, 384])
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 384, 384],
opt_shape=[1, 3, 384, 384],
max_shape=[1, 3, 384, 384])))
])


@ -8,7 +8,7 @@ backend_config = dict(
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 160, 160],
min_shape=[1, 3, 64, 64],
opt_shape=[1, 3, 608, 608],
max_shape=[1, 3, 608, 608])))
])


@ -8,7 +8,7 @@ backend_config = dict(
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 160, 160],
min_shape=[1, 3, 64, 64],
opt_shape=[1, 3, 608, 608],
max_shape=[1, 3, 608, 608])))
])


@ -6,7 +6,7 @@ backend_config = dict(
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 160, 160],
min_shape=[1, 3, 64, 64],
opt_shape=[1, 3, 608, 608],
max_shape=[1, 3, 608, 608])))
])
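The hunks above widen each TensorRT optimization profile by lowering `min_shape` from 160 to 64. TensorRT requires a profile to satisfy `min <= opt <= max` element-wise; a small Python check sketching that invariant (the helper name is ours, not an mmdeploy API):

```python
def profile_valid(min_shape, opt_shape, max_shape):
    # TensorRT optimization profiles must satisfy min <= opt <= max
    # for every dimension of the input tensor.
    return all(lo <= mid <= hi
               for lo, mid, hi in zip(min_shape, opt_shape, max_shape))
```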


@ -0,0 +1,12 @@
_base_ = ['./detection_onnxruntime_static.py']
onnx_config = dict(input_shape=[608, 608])
partition_config = dict(
type='yolov3_partition',
apply_marks=True,
partition_cfg=[
dict(
save_file='yolov3.onnx',
start=['detector_forward:input'],
end=['yolo_head:input'])
])
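The config above asks the converter to cut the graph at the `detector_forward:input` and `yolo_head:input` marks and save the resulting subgraph as `yolov3.onnx`. A hedged sanity check for such configs (a hypothetical helper for illustration, not part of the tool):

```python
def check_partition_config(partition_config):
    # Partitioning only works when marks are applied during export,
    # and every partition entry needs a target file plus start/end marks.
    assert partition_config.get('apply_marks'), \
        'partitioning requires apply_marks=True'
    for part in partition_config['partition_cfg']:
        missing = {'save_file', 'start', 'end'} - set(part)
        assert not missing, f'missing keys: {missing}'
    return True
```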


@ -0,0 +1,2 @@
_base_ = ['./super-resolution_static.py', '../../_base_/backends/snpe.py']
onnx_config = dict(input_shape=[256, 256])


@ -0,0 +1,3 @@
_base_ = ['./text-detection_static.py', '../../_base_/backends/snpe.py']
onnx_config = dict(input_shape=None)


@ -0,0 +1,3 @@
_base_ = ['./pose-detection_static.py', '../_base_/backends/snpe.py']
onnx_config = dict(input_shape=[256, 256])


@ -0,0 +1,23 @@
_base_ = ['./pose-detection_static.py', '../_base_/backends/tensorrt.py']
onnx_config = dict(
input_shape=[192, 256],
dynamic_axes={
'input': {
0: 'batch',
},
'output': {
0: 'batch'
}
})
backend_config = dict(
common_config=dict(max_workspace_size=1 << 30),
model_inputs=[
dict(
input_shapes=dict(
input=dict(
min_shape=[1, 3, 256, 192],
opt_shape=[2, 3, 256, 192],
max_shape=[4, 3, 256, 192])))
])


@ -6,4 +6,5 @@ codebase_config = dict(
score_threshold=0.05,
iou_threshold=0.1,
pre_top_k=3000,
keep_top_k=2000))
keep_top_k=2000,
max_output_boxes_per_class=2000))


@ -0,0 +1,3 @@
_base_ = ['./segmentation_static.py', '../_base_/backends/snpe.py']
onnx_config = dict(input_shape=[1024, 512])


@ -16,5 +16,6 @@ if (MMDEPLOY_BUILD_SDK)
add_subdirectory(preprocess)
add_subdirectory(net)
add_subdirectory(codebase)
add_subdirectory(apis/c)
add_subdirectory(apis/c/mmdeploy)
add_subdirectory(apis/cxx)
endif ()


@ -1,49 +0,0 @@
# Copyright (c) OpenMMLab. All rights reserved.
project(capis)
set(COMMON_LIST
common
model
executor
pipeline)
set(TASK_LIST ${MMDEPLOY_TASKS})
foreach (TASK ${COMMON_LIST})
set(TARGET_NAME mmdeploy_${TASK})
mmdeploy_add_library(${TARGET_NAME} ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.cpp)
target_link_libraries(${TARGET_NAME} PRIVATE mmdeploy::core)
target_include_directories(${TARGET_NAME} PUBLIC
$<INSTALL_INTERFACE:include/mmdeploy/apis/c>)
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.h
DESTINATION include/mmdeploy/apis/c)
endforeach ()
target_link_libraries(mmdeploy_executor PUBLIC
mmdeploy_common)
target_link_libraries(mmdeploy_pipeline PUBLIC
mmdeploy_executor mmdeploy_model mmdeploy_common)
foreach (TASK ${TASK_LIST})
set(TARGET_NAME mmdeploy_${TASK})
mmdeploy_add_library(${TARGET_NAME} ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.cpp)
target_link_libraries(${TARGET_NAME} PRIVATE
mmdeploy_pipeline mmdeploy::core)
target_include_directories(${TARGET_NAME} PUBLIC
$<INSTALL_INTERFACE:include/mmdeploy/apis/c>)
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.h
DESTINATION include/mmdeploy/apis/c)
endforeach ()
if (MMDEPLOY_BUILD_SDK_CSHARP_API)
# build MMDeployExtern.dll just for csharp nuget package.
# no Installation for c/c++ package.
file(GLOB SRCS "*.c" "*.cpp")
add_library(MMDeployExtern SHARED ${SRCS})
target_compile_definitions(MMDeployExtern PRIVATE -DMMDEPLOY_API_EXPORTS=1)
mmdeploy_load_static(MMDeployExtern MMDeployStaticModules)
mmdeploy_load_dynamic(MMDeployExtern MMDeployDynamicModules)
target_link_libraries(MMDeployExtern PRIVATE MMDeployLibs)
endif ()


@ -1,101 +0,0 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_COMMON_H
#define MMDEPLOY_COMMON_H
#include <stdint.h>
#ifndef MMDEPLOY_EXPORT
#ifdef _MSC_VER
#define MMDEPLOY_EXPORT __declspec(dllexport)
#else
#define MMDEPLOY_EXPORT __attribute__((visibility("default")))
#endif
#endif
#ifndef MMDEPLOY_API
#ifdef MMDEPLOY_API_EXPORTS
#define MMDEPLOY_API MMDEPLOY_EXPORT
#else
#define MMDEPLOY_API
#endif
#endif
// clang-format off
typedef enum {
MM_BGR,
MM_RGB,
MM_GRAYSCALE,
MM_NV12,
MM_NV21,
MM_BGRA,
MM_UNKNOWN_PIXEL_FORMAT
} mm_pixel_format_t;
typedef enum {
MM_FLOAT,
MM_HALF,
MM_INT8,
MM_INT32,
MM_UNKNOWN_DATA_TYPE
} mm_data_type_t;
enum mm_status_t {
MM_SUCCESS = 0,
MM_E_INVALID_ARG = 1,
MM_E_NOT_SUPPORTED = 2,
MM_E_OUT_OF_RANGE = 3,
MM_E_OUT_OF_MEMORY = 4,
MM_E_FILE_NOT_EXIST = 5,
MM_E_FAIL = 6,
MM_E_UNKNOWN = -1,
};
// clang-format on
typedef void* mm_handle_t;
typedef void* mm_model_t;
typedef struct mm_mat_t {
uint8_t* data;
int height;
int width;
int channel;
mm_pixel_format_t format;
mm_data_type_t type;
} mm_mat_t;
typedef struct mm_rect_t {
float left;
float top;
float right;
float bottom;
} mm_rect_t;
typedef struct mm_pointi_t {
int x;
int y;
} mm_pointi_t;
typedef struct mm_pointf_t {
float x;
float y;
} mm_pointf_t;
typedef struct mmdeploy_value* mmdeploy_value_t;
#if __cplusplus
extern "C" {
#endif
MMDEPLOY_API mmdeploy_value_t mmdeploy_value_copy(mmdeploy_value_t input);
MMDEPLOY_API int mmdeploy_value_destroy(mmdeploy_value_t value);
#if __cplusplus
}
#endif
#endif // MMDEPLOY_COMMON_H


@ -0,0 +1,80 @@
# Copyright (c) OpenMMLab. All rights reserved.
project(capis)
set(CAPI_OBJS)
macro(add_object name)
add_library(${name} OBJECT ${ARGN})
set_target_properties(${name} PROPERTIES POSITION_INDEPENDENT_CODE 1)
target_compile_definitions(${name} PRIVATE -DMMDEPLOY_API_EXPORTS=1)
if (NOT MSVC)
target_compile_options(${name} PRIVATE $<$<COMPILE_LANGUAGE:CXX>:-fvisibility=hidden>)
endif ()
target_link_libraries(${name} PRIVATE mmdeploy::core)
set(CAPI_OBJS ${CAPI_OBJS} ${name})
mmdeploy_export(${name})
endmacro()
set(COMMON_LIST
common
model
executor
pipeline)
set(TASK_LIST ${MMDEPLOY_TASKS})
foreach (TASK ${COMMON_LIST})
set(TARGET_NAME mmdeploy_${TASK})
set(OBJECT_NAME mmdeploy_${TASK}_obj)
add_object(${OBJECT_NAME} ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.cpp)
mmdeploy_add_library(${TARGET_NAME})
target_link_libraries(${TARGET_NAME} PRIVATE ${OBJECT_NAME})
target_include_directories(${TARGET_NAME} PUBLIC
$<INSTALL_INTERFACE:include>)
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.h
DESTINATION include/mmdeploy)
endforeach ()
target_link_libraries(mmdeploy_executor PUBLIC
mmdeploy_common)
target_link_libraries(mmdeploy_pipeline PUBLIC
mmdeploy_executor mmdeploy_model mmdeploy_common)
foreach (TASK ${TASK_LIST})
set(TARGET_NAME mmdeploy_${TASK})
set(OBJECT_NAME mmdeploy_${TASK}_obj)
add_object(${OBJECT_NAME} ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.cpp)
mmdeploy_add_library(${TARGET_NAME})
target_link_libraries(${TARGET_NAME} PRIVATE ${OBJECT_NAME}
mmdeploy_pipeline)
target_include_directories(${TARGET_NAME} PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/..>
$<INSTALL_INTERFACE:include>)
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/${TASK}.h
DESTINATION include/mmdeploy)
endforeach ()
install(DIRECTORY ${CMAKE_SOURCE_DIR}/demo/csrc/ DESTINATION example/cpp
FILES_MATCHING
PATTERN "*.cpp"
PATTERN "CMakeLists.txt"
)
if (MMDEPLOY_BUILD_SDK_CSHARP_API OR MMDEPLOY_BUILD_SDK_MONOLITHIC)
add_library(mmdeploy SHARED)
mmdeploy_load_static(mmdeploy MMDeployStaticModules)
mmdeploy_load_dynamic(mmdeploy MMDeployDynamicModules)
target_link_libraries(mmdeploy PRIVATE ${CAPI_OBJS})
target_include_directories(mmdeploy PUBLIC
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/..>
$<INSTALL_INTERFACE:include>)
set(MMDEPLOY_VERSION ${MMDEPLOY_VERSION_MAJOR}
.${MMDEPLOY_VERSION_MINOR}
.${MMDEPLOY_VERSION_PATCH})
string(REPLACE ";" "" MMDEPLOY_VERSION ${MMDEPLOY_VERSION})
set_target_properties(mmdeploy PROPERTIES
VERSION ${MMDEPLOY_VERSION}
SOVERSION ${MMDEPLOY_VERSION_MAJOR})
mmdeploy_export(mmdeploy)
endif ()


@ -4,14 +4,14 @@
#include <numeric>
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/handle.h"
#include "mmdeploy/apis/c/pipeline.h"
#include "common_internal.h"
#include "handle.h"
#include "mmdeploy/archive/value_archive.h"
#include "mmdeploy/codebase/mmcls/mmcls.h"
#include "mmdeploy/core/device.h"
#include "mmdeploy/core/graph.h"
#include "mmdeploy/core/utils/formatter.h"
#include "pipeline.h"
using namespace mmdeploy;
using namespace std;
@ -43,72 +43,77 @@ Value& config_template() {
return v;
}
int mmdeploy_classifier_create_impl(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
int mmdeploy_classifier_create_impl(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info,
mmdeploy_classifier_t* classifier) {
auto config = config_template();
config["pipeline"]["tasks"][0]["params"]["model"] = *static_cast<Model*>(model);
config["pipeline"]["tasks"][0]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info, handle);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)classifier);
}
} // namespace
int mmdeploy_classifier_create(mm_model_t model, const char* device_name, int device_id,
mm_handle_t* handle) {
return mmdeploy_classifier_create_impl(model, device_name, device_id, nullptr, handle);
int mmdeploy_classifier_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_classifier_t* classifier) {
return mmdeploy_classifier_create_impl(model, device_name, device_id, nullptr, classifier);
}
int mmdeploy_classifier_create_v2(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
return mmdeploy_classifier_create_impl(model, device_name, device_id, exec_info, handle);
int mmdeploy_classifier_create_v2(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info,
mmdeploy_classifier_t* classifier) {
return mmdeploy_classifier_create_impl(model, device_name, device_id, exec_info, classifier);
}
int mmdeploy_classifier_create_by_path(const char* model_path, const char* device_name,
int device_id, mm_handle_t* handle) {
mm_model_t model{};
int device_id, mmdeploy_classifier_t* classifier) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_classifier_create_impl(model, device_name, device_id, nullptr, handle);
auto ec = mmdeploy_classifier_create_impl(model, device_name, device_id, nullptr, classifier);
mmdeploy_model_destroy(model);
return ec;
}
int mmdeploy_classifier_create_input(const mm_mat_t* mats, int mat_count, mmdeploy_value_t* value) {
int mmdeploy_classifier_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value) {
return mmdeploy_common_create_input(mats, mat_count, value);
}
int mmdeploy_classifier_apply(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
mm_class_t** results, int** result_count) {
int mmdeploy_classifier_apply(mmdeploy_classifier_t classifier, const mmdeploy_mat_t* mats,
int mat_count, mmdeploy_classification_t** results,
int** result_count) {
wrapped<mmdeploy_value_t> input;
if (auto ec = mmdeploy_classifier_create_input(mats, mat_count, input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_classifier_apply_v2(handle, input, output.ptr())) {
if (auto ec = mmdeploy_classifier_apply_v2(classifier, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_classifier_get_result(output, results, result_count)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
int mmdeploy_classifier_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
int mmdeploy_classifier_apply_v2(mmdeploy_classifier_t classifier, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply(handle, input, output);
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)classifier, input, output);
}
int mmdeploy_classifier_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
int mmdeploy_classifier_apply_async(mmdeploy_classifier_t classifier, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async(handle, input, output);
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)classifier, input, output);
}
int mmdeploy_classifier_get_result(mmdeploy_value_t output, mm_class_t** results,
int mmdeploy_classifier_get_result(mmdeploy_value_t output, mmdeploy_classification_t** results,
int** result_count) {
if (!output || !results || !result_count) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
try {
Value& value = Cast(output)->front();
@ -127,7 +132,8 @@ int mmdeploy_classifier_get_result(mmdeploy_value_t output, mm_class_t** results
std::unique_ptr<int[]> result_count_data(new int[_result_count.size()]{});
std::copy(_result_count.begin(), _result_count.end(), result_count_data.get());
std::unique_ptr<mm_class_t[]> result_data(new mm_class_t[total]{});
std::unique_ptr<mmdeploy_classification_t[]> result_data(
new mmdeploy_classification_t[total]{});
auto result_ptr = result_data.get();
for (const auto& cls_output : classify_outputs) {
for (const auto& label : cls_output.labels) {
@ -140,23 +146,21 @@ int mmdeploy_classifier_get_result(mmdeploy_value_t output, mm_class_t** results
*result_count = result_count_data.release();
*results = result_data.release();
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("unhandled exception: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
void mmdeploy_classifier_release_result(mm_class_t* results, const int* result_count, int count) {
void mmdeploy_classifier_release_result(mmdeploy_classification_t* results, const int* result_count,
int count) {
delete[] results;
delete[] result_count;
}
void mmdeploy_classifier_destroy(mm_handle_t handle) {
if (handle != nullptr) {
auto classifier = static_cast<AsyncHandle*>(handle);
delete classifier;
}
void mmdeploy_classifier_destroy(mmdeploy_classifier_t classifier) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)classifier);
}


@ -10,15 +10,18 @@
#include "common.h"
#include "executor.h"
#include "model.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mm_class_t {
typedef struct mmdeploy_classification_t {
int label_id;
float score;
} mm_class_t;
} mmdeploy_classification_t;
typedef struct mmdeploy_classifier* mmdeploy_classifier_t;
/**
* @brief Create classifier's handle
@ -26,29 +29,30 @@ typedef struct mm_class_t {
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle instance of a classifier, which must be destroyed
* @param[out] classifier instance of a classifier, which must be destroyed
* by \ref mmdeploy_classifier_destroy
* @return status of creating classifier's handle
*/
MMDEPLOY_API int mmdeploy_classifier_create(mm_model_t model, const char* device_name,
int device_id, mm_handle_t* handle);
MMDEPLOY_API int mmdeploy_classifier_create(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_classifier_t* classifier);
/**
* @brief Create classifier's handle
* @param[in] model_path path of mmclassification sdk model exported by mmdeploy model converter
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle instance of a classifier, which must be destroyed
* @param[out] classifier instance of a classifier, which must be destroyed
* by \ref mmdeploy_classifier_destroy
* @return status of creating classifier's handle
*/
MMDEPLOY_API int mmdeploy_classifier_create_by_path(const char* model_path, const char* device_name,
int device_id, mm_handle_t* handle);
int device_id,
mmdeploy_classifier_t* classifier);
/**
* @brief Use classifier created by \ref mmdeploy_classifier_create_by_path to get label
* information of each image in a batch
* @param[in] handle classifier's handle created by \ref mmdeploy_classifier_create_by_path
* @param[in] classifier classifier's handle created by \ref mmdeploy_classifier_create_by_path
* @param[in] mats a batch of images
* @param[in] mat_count number of images in the batch
* @param[out] results a linear buffer to save classification results of each
@@ -58,8 +62,9 @@ MMDEPLOY_API int mmdeploy_classifier_create_by_path(const char* model_path, cons
* mmdeploy_classifier_release_result
* @return status of inference
*/
MMDEPLOY_API int mmdeploy_classifier_apply(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
mm_class_t** results, int** result_count);
MMDEPLOY_API int mmdeploy_classifier_apply(mmdeploy_classifier_t classifier,
const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_classification_t** results, int** result_count);
/**
* @brief Release the inference result buffer created by \ref mmdeploy_classifier_apply
@@ -67,14 +72,14 @@ MMDEPLOY_API int mmdeploy_classifier_apply(mm_handle_t handle, const mm_mat_t* m
* @param[in] result_count \p results size buffer
* @param[in] count length of \p result_count
*/
MMDEPLOY_API void mmdeploy_classifier_release_result(mm_class_t* results, const int* result_count,
int count);
MMDEPLOY_API void mmdeploy_classifier_release_result(mmdeploy_classification_t* results,
const int* result_count, int count);
/**
* @brief Destroy classifier's handle
* @param[in] handle classifier's handle created by \ref mmdeploy_classifier_create_by_path
* @param[in] classifier classifier's handle created by \ref mmdeploy_classifier_create_by_path
*/
MMDEPLOY_API void mmdeploy_classifier_destroy(mm_handle_t handle);
MMDEPLOY_API void mmdeploy_classifier_destroy(mmdeploy_classifier_t classifier);
/******************************************************************************
* Experimental asynchronous APIs */
@@ -83,9 +88,9 @@ MMDEPLOY_API void mmdeploy_classifier_destroy(mm_handle_t handle);
* @brief Same as \ref mmdeploy_classifier_create, but allows controlling the execution
* context of tasks via exec_info
*/
MMDEPLOY_API int mmdeploy_classifier_create_v2(mm_model_t model, const char* device_name,
MMDEPLOY_API int mmdeploy_classifier_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mm_handle_t* handle);
mmdeploy_classifier_t* classifier);
/**
* @brief Pack classifier inputs into mmdeploy_value_t
@@ -94,24 +99,25 @@ MMDEPLOY_API int mmdeploy_classifier_create_v2(mm_model_t model, const char* dev
* @param[out] value the packed value
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_classifier_create_input(const mm_mat_t* mats, int mat_count,
MMDEPLOY_API int mmdeploy_classifier_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value);
/**
* @brief Same as \ref mmdeploy_classifier_apply, but input and output are packed in \ref
* mmdeploy_value_t.
*/
MMDEPLOY_API int mmdeploy_classifier_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
mmdeploy_value_t* output);
MMDEPLOY_API int mmdeploy_classifier_apply_v2(mmdeploy_classifier_t classifier,
mmdeploy_value_t input, mmdeploy_value_t* output);
/**
* @brief Apply classifier asynchronously
* @param[in] handle handle of the classifier
* @param[in] classifier handle of the classifier
* @param[in] input input sender that will be consumed by the operation
* @param[out] output output sender
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_classifier_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
MMDEPLOY_API int mmdeploy_classifier_apply_async(mmdeploy_classifier_t classifier,
mmdeploy_sender_t input,
mmdeploy_sender_t* output);
/**
@@ -123,7 +129,8 @@ MMDEPLOY_API int mmdeploy_classifier_apply_async(mm_handle_t handle, mmdeploy_se
* released by \ref mmdeploy_classifier_release_result
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_classifier_get_result(mmdeploy_value_t output, mm_class_t** results,
MMDEPLOY_API int mmdeploy_classifier_get_result(mmdeploy_value_t output,
mmdeploy_classification_t** results,
int** result_count);
#ifdef __cplusplus

View File

@@ -1,14 +1,14 @@
#include "common.h"
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/handle.h"
#include "common_internal.h"
#include "handle.h"
#include "mmdeploy/core/mat.h"
mmdeploy_value_t mmdeploy_value_copy(mmdeploy_value_t input) {
if (!input) {
mmdeploy_value_t mmdeploy_value_copy(mmdeploy_value_t value) {
if (!value) {
return nullptr;
}
return Guard([&] { return Take(Value(*Cast(input))); });
return Guard([&] { return Take(Value(*Cast(value))); });
}
int mmdeploy_value_destroy(mmdeploy_value_t value) {
@@ -16,9 +16,10 @@ int mmdeploy_value_destroy(mmdeploy_value_t value) {
return 0;
}
int mmdeploy_common_create_input(const mm_mat_t* mats, int mat_count, mmdeploy_value_t* value) {
int mmdeploy_common_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value) {
if (mat_count && mats == nullptr) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
try {
auto input = std::make_unique<Value>(Value{Value::kArray});
@@ -33,5 +34,5 @@ int mmdeploy_common_create_input(const mm_mat_t* mats, int mat_count, mmdeploy_v
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}

View File

@@ -0,0 +1,92 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_COMMON_H
#define MMDEPLOY_COMMON_H
#include <stdint.h> // NOLINT
#ifndef MMDEPLOY_EXPORT
#ifdef _MSC_VER
#define MMDEPLOY_EXPORT __declspec(dllexport)
#else
#define MMDEPLOY_EXPORT __attribute__((visibility("default")))
#endif
#endif
#ifndef MMDEPLOY_API
#ifdef MMDEPLOY_API_EXPORTS
#define MMDEPLOY_API MMDEPLOY_EXPORT
#else
#define MMDEPLOY_API
#endif
#endif
// clang-format off
typedef enum mmdeploy_pixel_format_t{
MMDEPLOY_PIXEL_FORMAT_BGR,
MMDEPLOY_PIXEL_FORMAT_RGB,
MMDEPLOY_PIXEL_FORMAT_GRAYSCALE,
MMDEPLOY_PIXEL_FORMAT_NV12,
MMDEPLOY_PIXEL_FORMAT_NV21,
MMDEPLOY_PIXEL_FORMAT_BGRA,
MMDEPLOY_PIXEL_FORMAT_COUNT
} mmdeploy_pixel_format_t;
typedef enum mmdeploy_data_type_t{
MMDEPLOY_DATA_TYPE_FLOAT,
MMDEPLOY_DATA_TYPE_HALF,
MMDEPLOY_DATA_TYPE_UINT8,
MMDEPLOY_DATA_TYPE_INT32,
MMDEPLOY_DATA_TYPE_COUNT
} mmdeploy_data_type_t;
typedef enum mmdeploy_status_t {
MMDEPLOY_SUCCESS = 0,
MMDEPLOY_E_INVALID_ARG = 1,
MMDEPLOY_E_NOT_SUPPORTED = 2,
MMDEPLOY_E_OUT_OF_RANGE = 3,
MMDEPLOY_E_OUT_OF_MEMORY = 4,
MMDEPLOY_E_FILE_NOT_EXIST = 5,
MMDEPLOY_E_FAIL = 6,
MMDEPLOY_STATUS_COUNT = 7
} mmdeploy_status_t;
// clang-format on
typedef struct mmdeploy_mat_t {
uint8_t* data;
int height;
int width;
int channel;
mmdeploy_pixel_format_t format;
mmdeploy_data_type_t type;
} mmdeploy_mat_t;
typedef struct mmdeploy_rect_t {
float left;
float top;
float right;
float bottom;
} mmdeploy_rect_t;
typedef struct mmdeploy_point_t {
float x;
float y;
} mmdeploy_point_t;
typedef struct mmdeploy_value* mmdeploy_value_t;
#if __cplusplus
extern "C" {
#endif
MMDEPLOY_API mmdeploy_value_t mmdeploy_value_copy(mmdeploy_value_t value);
MMDEPLOY_API int mmdeploy_value_destroy(mmdeploy_value_t value);
#if __cplusplus
}
#endif
#endif // MMDEPLOY_COMMON_H

View File

@@ -3,9 +3,11 @@
#ifndef MMDEPLOY_CSRC_APIS_C_COMMON_INTERNAL_H_
#define MMDEPLOY_CSRC_APIS_C_COMMON_INTERNAL_H_
#include "mmdeploy/apis/c/common.h"
#include "mmdeploy/apis/c/model.h"
#include "common.h"
#include "handle.h"
#include "mmdeploy/core/value.h"
#include "model.h"
#include "pipeline.h"
using namespace mmdeploy;
@@ -25,6 +27,16 @@ mmdeploy_value_t Take(Value v) {
return Cast(new Value(std::move(v))); // NOLINT
}
mmdeploy_pipeline_t Cast(AsyncHandle* pipeline) {
return reinterpret_cast<mmdeploy_pipeline_t>(pipeline);
}
AsyncHandle* Cast(mmdeploy_pipeline_t pipeline) { return reinterpret_cast<AsyncHandle*>(pipeline); }
mmdeploy_model_t Cast(Model* model) { return reinterpret_cast<mmdeploy_model_t>(model); }
Model* Cast(mmdeploy_model_t model) { return reinterpret_cast<Model*>(model); }
template <typename F>
std::invoke_result_t<F> Guard(F f) {
try {
@@ -80,7 +92,7 @@ class wrapped<T, std::void_t<decltype(Cast(T{}))> > {
} // namespace
MMDEPLOY_API int mmdeploy_common_create_input(const mm_mat_t* mats, int mat_count,
MMDEPLOY_API int mmdeploy_common_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value);
#endif // MMDEPLOY_CSRC_APIS_C_COMMON_INTERNAL_H_

View File

@@ -4,16 +4,16 @@
#include <numeric>
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/executor_internal.h"
#include "mmdeploy/apis/c/model.h"
#include "mmdeploy/apis/c/pipeline.h"
#include "common_internal.h"
#include "executor_internal.h"
#include "mmdeploy/archive/value_archive.h"
#include "mmdeploy/codebase/mmdet/mmdet.h"
#include "mmdeploy/core/device.h"
#include "mmdeploy/core/model.h"
#include "mmdeploy/core/utils/formatter.h"
#include "mmdeploy/core/value.h"
#include "model.h"
#include "pipeline.h"
using namespace std;
using namespace mmdeploy;
@@ -45,72 +45,74 @@ Value& config_template() {
return v;
}
int mmdeploy_detector_create_impl(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
int mmdeploy_detector_create_impl(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mmdeploy_detector_t* detector) {
auto config = config_template();
config["pipeline"]["tasks"][0]["params"]["model"] = *static_cast<Model*>(model);
config["pipeline"]["tasks"][0]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info, handle);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)detector);
}
} // namespace
int mmdeploy_detector_create(mm_model_t model, const char* device_name, int device_id,
mm_handle_t* handle) {
return mmdeploy_detector_create_impl(model, device_name, device_id, nullptr, handle);
int mmdeploy_detector_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_detector_t* detector) {
return mmdeploy_detector_create_impl(model, device_name, device_id, nullptr, detector);
}
int mmdeploy_detector_create_v2(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
return mmdeploy_detector_create_impl(model, device_name, device_id, exec_info, handle);
int mmdeploy_detector_create_v2(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mmdeploy_detector_t* detector) {
return mmdeploy_detector_create_impl(model, device_name, device_id, exec_info, detector);
}
int mmdeploy_detector_create_by_path(const char* model_path, const char* device_name, int device_id,
mm_handle_t* handle) {
mm_model_t model{};
mmdeploy_detector_t* detector) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_detector_create_impl(model, device_name, device_id, nullptr, handle);
auto ec = mmdeploy_detector_create_impl(model, device_name, device_id, nullptr, detector);
mmdeploy_model_destroy(model);
return ec;
}
int mmdeploy_detector_create_input(const mm_mat_t* mats, int mat_count, mmdeploy_value_t* input) {
int mmdeploy_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* input) {
return mmdeploy_common_create_input(mats, mat_count, input);
}
int mmdeploy_detector_apply(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
mm_detect_t** results, int** result_count) {
int mmdeploy_detector_apply(mmdeploy_detector_t detector, const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_detection_t** results, int** result_count) {
wrapped<mmdeploy_value_t> input;
if (auto ec = mmdeploy_detector_create_input(mats, mat_count, input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_detector_apply_v2(handle, input, output.ptr())) {
if (auto ec = mmdeploy_detector_apply_v2(detector, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_detector_get_result(output, results, result_count)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
int mmdeploy_detector_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
int mmdeploy_detector_apply_v2(mmdeploy_detector_t detector, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply(handle, input, output);
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_detector_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
int mmdeploy_detector_apply_async(mmdeploy_detector_t detector, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async(handle, input, output);
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_detector_get_result(mmdeploy_value_t output, mm_detect_t** results,
int mmdeploy_detector_get_result(mmdeploy_value_t output, mmdeploy_detection_t** results,
int** result_count) {
if (!output || !results || !result_count) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
try {
Value& value = Cast(output)->front();
@@ -127,11 +129,11 @@ int mmdeploy_detector_get_result(mmdeploy_value_t output, mm_detect_t** results,
auto result_count_ptr = result_count_data.get();
std::copy(_result_count.begin(), _result_count.end(), result_count_data.get());
auto deleter = [&](mm_detect_t* p) {
auto deleter = [&](mmdeploy_detection_t* p) {
mmdeploy_detector_release_result(p, result_count_ptr, (int)detector_outputs.size());
};
std::unique_ptr<mm_detect_t[], decltype(deleter)> result_data(new mm_detect_t[total]{},
deleter);
std::unique_ptr<mmdeploy_detection_t[], decltype(deleter)> result_data(
new mmdeploy_detection_t[total]{}, deleter);
// ownership transferred to result_data
result_count_data.release();
@@ -146,7 +148,7 @@ int mmdeploy_detector_get_result(mmdeploy_value_t output, mm_detect_t** results,
auto mask_byte_size = detection.mask.byte_size();
if (mask_byte_size) {
auto& mask = detection.mask;
result_ptr->mask = new mm_instance_mask_t{};
result_ptr->mask = new mmdeploy_instance_mask_t{};
result_ptr->mask->data = new char[mask_byte_size];
result_ptr->mask->width = mask.width();
result_ptr->mask->height = mask.height();
@@ -159,16 +161,17 @@ int mmdeploy_detector_get_result(mmdeploy_value_t output, mm_detect_t** results,
*result_count = result_count_ptr;
*results = result_data.release();
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("unhandled exception: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
void mmdeploy_detector_release_result(mm_detect_t* results, const int* result_count, int count) {
void mmdeploy_detector_release_result(mmdeploy_detection_t* results, const int* result_count,
int count) {
auto result_ptr = results;
for (int i = 0; i < count; ++i) {
for (int j = 0; j < result_count[i]; ++j, ++result_ptr) {
@@ -182,4 +185,6 @@ void mmdeploy_detector_release_result(mm_detect_t* results, const int* result_co
delete[] result_count;
}
void mmdeploy_detector_destroy(mm_handle_t handle) { mmdeploy_pipeline_destroy(handle); }
void mmdeploy_detector_destroy(mmdeploy_detector_t detector) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)detector);
}

View File

@@ -10,23 +10,26 @@
#include "common.h"
#include "executor.h"
#include "model.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mm_instance_mask_t {
typedef struct mmdeploy_instance_mask_t {
char* data;
int height;
int width;
} mm_instance_mask_t;
} mmdeploy_instance_mask_t;
typedef struct mm_detect_t {
typedef struct mmdeploy_detection_t {
int label_id;
float score;
mm_rect_t bbox;
mm_instance_mask_t* mask;
} mm_detect_t;
mmdeploy_rect_t bbox;
mmdeploy_instance_mask_t* mask;
} mmdeploy_detection_t;
typedef struct mmdeploy_detector* mmdeploy_detector_t;
/**
* @brief Create detector's handle
@@ -34,26 +37,26 @@ typedef struct mm_detect_t {
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle instance of a detector
* @param[out] detector instance of a detector
* @return status of creating detector's handle
*/
MMDEPLOY_API int mmdeploy_detector_create(mm_model_t model, const char* device_name, int device_id,
mm_handle_t* handle);
MMDEPLOY_API int mmdeploy_detector_create(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_detector_t* detector);
/**
* @brief Create detector's handle
* @param[in] model_path path of mmdetection sdk model exported by mmdeploy model converter
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle instance of a detector
* @param[out] detector instance of a detector
* @return status of creating detector's handle
*/
MMDEPLOY_API int mmdeploy_detector_create_by_path(const char* model_path, const char* device_name,
int device_id, mm_handle_t* handle);
int device_id, mmdeploy_detector_t* detector);
/**
* @brief Apply detector to a batch of images and get their inference results
* @param[in] handle detector's handle created by \ref mmdeploy_detector_create_by_path
* @param[in] detector detector's handle created by \ref mmdeploy_detector_create_by_path
* @param[in] mats a batch of images
* @param[in] mat_count number of images in the batch
* @param[out] results a linear buffer to save detection results of each image. It must be released
@@ -63,22 +66,23 @@ MMDEPLOY_API int mmdeploy_detector_create_by_path(const char* model_path, const
* mmdeploy_detector_release_result
* @return status of inference
*/
MMDEPLOY_API int mmdeploy_detector_apply(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
mm_detect_t** results, int** result_count);
MMDEPLOY_API int mmdeploy_detector_apply(mmdeploy_detector_t detector, const mmdeploy_mat_t* mats,
int mat_count, mmdeploy_detection_t** results,
int** result_count);
/** @brief Release the inference result buffer created by \ref mmdeploy_detector_apply
* @param[in] results detection results buffer
* @param[in] result_count \p results size buffer
* @param[in] count length of \p result_count
*/
MMDEPLOY_API void mmdeploy_detector_release_result(mm_detect_t* results, const int* result_count,
int count);
MMDEPLOY_API void mmdeploy_detector_release_result(mmdeploy_detection_t* results,
const int* result_count, int count);
/**
* @brief Destroy detector's handle
* @param[in] handle detector's handle created by \ref mmdeploy_detector_create_by_path
* @param[in] detector detector's handle created by \ref mmdeploy_detector_create_by_path
*/
MMDEPLOY_API void mmdeploy_detector_destroy(mm_handle_t handle);
MMDEPLOY_API void mmdeploy_detector_destroy(mmdeploy_detector_t detector);
/******************************************************************************
* Experimental asynchronous APIs */
@@ -87,9 +91,9 @@ MMDEPLOY_API void mmdeploy_detector_destroy(mm_handle_t handle);
* @brief Same as \ref mmdeploy_detector_create, but allows controlling the execution
* context of tasks via exec_info
*/
MMDEPLOY_API int mmdeploy_detector_create_v2(mm_model_t model, const char* device_name,
MMDEPLOY_API int mmdeploy_detector_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mm_handle_t* handle);
mmdeploy_detector_t* detector);
/**
* @brief Pack detector inputs into mmdeploy_value_t
@@ -97,24 +101,24 @@ MMDEPLOY_API int mmdeploy_detector_create_v2(mm_model_t model, const char* devic
* @param[in] mat_count number of images in the batch
* @param[out] input the packed value
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_detector_create_input(const mm_mat_t* mats, int mat_count,
MMDEPLOY_API int mmdeploy_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* input);
/**
* @brief Same as \ref mmdeploy_detector_apply, but input and output are packed in \ref
* mmdeploy_value_t.
*/
MMDEPLOY_API int mmdeploy_detector_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
MMDEPLOY_API int mmdeploy_detector_apply_v2(mmdeploy_detector_t detector, mmdeploy_value_t input,
mmdeploy_value_t* output);
/**
* @brief Apply detector asynchronously
* @param[in] handle handle to the detector
* @param[in] detector handle to the detector
* @param[in] input input sender that will be consumed by the operation
* @param[out] output output sender
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_detector_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_detector_apply_async(mmdeploy_detector_t detector,
mmdeploy_sender_t input, mmdeploy_sender_t* output);
/**
* @brief Unpack detector output from a mmdeploy_value_t
@@ -126,8 +130,8 @@ MMDEPLOY_API int mmdeploy_detector_apply_async(mm_handle_t handle, mmdeploy_send
* mmdeploy_detector_release_result
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_detector_get_result(mmdeploy_value_t output, mm_detect_t** results,
int** result_count);
MMDEPLOY_API int mmdeploy_detector_get_result(mmdeploy_value_t output,
mmdeploy_detection_t** results, int** result_count);
#ifdef __cplusplus
}

View File

@@ -1,6 +1,6 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include "mmdeploy/apis/c/executor.h"
#include "executor.h"
#include "common.h"
#include "common_internal.h"
@@ -15,12 +15,13 @@ mmdeploy_scheduler_t CreateScheduler(const char* type, const Value& config = Val
try {
auto creator = Registry<SchedulerType>::Get().GetCreator(type);
if (!creator) {
MMDEPLOY_ERROR("creator for {} not found.", type);
MMDEPLOY_ERROR("Creator for {} not found. Available schedulers: {}", type,
Registry<SchedulerType>::Get().List());
return nullptr;
}
return Cast(new SchedulerType(creator->Create(config)));
} catch (const std::exception& e) {
MMDEPLOY_ERROR("failed to create {}, error: {}", type, e.what());
MMDEPLOY_ERROR("failed to create Scheduler: {} ({}), config: {}", type, e.what(), config);
return nullptr;
}
}
@@ -168,14 +169,14 @@ mmdeploy_sender_t mmdeploy_executor_ensure_started(mmdeploy_sender_t input) {
int mmdeploy_executor_start_detached(mmdeploy_sender_t input) {
if (!input) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
try {
StartDetached(Take(input));
return 0;
} catch (...) {
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
mmdeploy_value_t mmdeploy_executor_sync_wait(mmdeploy_sender_t input) {
@@ -187,18 +188,18 @@ mmdeploy_value_t mmdeploy_executor_sync_wait(mmdeploy_sender_t input) {
int mmdeploy_executor_sync_wait_v2(mmdeploy_sender_t sender, mmdeploy_value_t* value) {
if (!sender) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
auto result = mmdeploy_executor_sync_wait(sender);
if (!result) {
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
if (value) {
*value = result;
} else {
mmdeploy_value_destroy(result);
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
void mmdeploy_executor_execute(mmdeploy_scheduler_t scheduler, void (*fn)(void*), void* context) {

View File

@@ -22,12 +22,13 @@ class AsyncHandle {
config["context"].update({{"device", device_}, {"stream", stream_}});
auto creator = Registry<graph::Node>::Get().GetCreator("Pipeline");
if (!creator) {
MMDEPLOY_ERROR("failed to find Pipeline creator");
MMDEPLOY_ERROR("Failed to find Pipeline creator. Available nodes: {}",
Registry<graph::Node>::Get().List());
throw_exception(eEntryNotFound);
}
pipeline_ = creator->Create(config);
if (!pipeline_) {
MMDEPLOY_ERROR("create pipeline failed");
MMDEPLOY_ERROR("Failed to create pipeline, config: {}", config);
throw_exception(eFail);
}
}

View File

@@ -11,30 +11,30 @@
using namespace mmdeploy;
int mmdeploy_model_create_by_path(const char *path, mm_model_t *model) {
int mmdeploy_model_create_by_path(const char* path, mmdeploy_model_t* model) {
try {
auto ptr = std::make_unique<Model>(path);
*model = ptr.release();
return MM_SUCCESS;
} catch (const std::exception &e) {
*model = reinterpret_cast<mmdeploy_model_t>(ptr.release());
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("failed to create model: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
int mmdeploy_model_create(const void *buffer, int size, mm_model_t *model) {
int mmdeploy_model_create(const void* buffer, int size, mmdeploy_model_t* model) {
try {
auto ptr = std::make_unique<Model>(buffer, size);
*model = ptr.release();
return MM_SUCCESS;
} catch (const std::exception &e) {
*model = reinterpret_cast<mmdeploy_model_t>(ptr.release());
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("failed to create model: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
void mmdeploy_model_destroy(mm_model_t model) { delete static_cast<Model *>(model); }
void mmdeploy_model_destroy(mmdeploy_model_t model) { delete reinterpret_cast<Model*>(model); }

View File

@@ -14,13 +14,15 @@
extern "C" {
#endif
typedef struct mmdeploy_model* mmdeploy_model_t;
/**
* @brief Create SDK Model instance from given model path
* @param[in] path model path
* @param[out] model sdk model instance that must be destroyed by \ref mmdeploy_model_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_model_create_by_path(const char* path, mm_model_t* model);
MMDEPLOY_API int mmdeploy_model_create_by_path(const char* path, mmdeploy_model_t* model);
/**
* @brief Create SDK Model instance from memory
@@ -29,14 +31,14 @@ MMDEPLOY_API int mmdeploy_model_create_by_path(const char* path, mm_model_t* mod
* @param[out] model sdk model instance that must be destroyed by \ref mmdeploy_model_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_model_create(const void* buffer, int size, mm_model_t* model);
MMDEPLOY_API int mmdeploy_model_create(const void* buffer, int size, mmdeploy_model_t* model);
/**
* @brief Destroy model instance
* @param[in] model sdk model instance created by \ref mmdeploy_model_create_by_path or \ref
* mmdeploy_model_create
*/
MMDEPLOY_API void mmdeploy_model_destroy(mm_model_t model);
MMDEPLOY_API void mmdeploy_model_destroy(mmdeploy_model_t model);
#ifdef __cplusplus
}

View File

@@ -2,12 +2,12 @@
#include "pipeline.h"
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/executor_internal.h"
#include "mmdeploy/apis/c/handle.h"
#include "common_internal.h"
#include "executor_internal.h"
#include "handle.h"
int mmdeploy_pipeline_create(mmdeploy_value_t config, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
mmdeploy_exec_info_t exec_info, mmdeploy_pipeline_t* pipeline) {
try {
auto _config = *Cast(config);
if (exec_info) {
@@ -16,57 +16,58 @@ int mmdeploy_pipeline_create(mmdeploy_value_t config, const char* device_name, i
info[p->task_name] = *Cast(p->scheduler);
if (p->next == exec_info) {
MMDEPLOY_ERROR("cycle detected in exec_info list.");
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
}
}
auto _handle = std::make_unique<AsyncHandle>(device_name, device_id, std::move(_config));
*handle = _handle.release();
return MM_SUCCESS;
*pipeline = Cast(_handle.release());
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("exception caught: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
int mmdeploy_pipeline_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
int mmdeploy_pipeline_apply_async(mmdeploy_pipeline_t pipeline, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
if (!handle || !input || !output) {
return MM_E_INVALID_ARG;
if (!pipeline || !input || !output) {
return MMDEPLOY_E_INVALID_ARG;
}
try {
auto h = static_cast<AsyncHandle*>(handle);
auto h = Cast(pipeline);
*output = Take(h->Process(Take(input)));
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("exception caught: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
void mmdeploy_pipeline_destroy(mm_handle_t handle) {
if (handle != nullptr) {
delete static_cast<AsyncHandle*>(handle);
void mmdeploy_pipeline_destroy(mmdeploy_pipeline_t pipeline) {
if (pipeline != nullptr) {
delete Cast(pipeline);
}
}
int mmdeploy_pipeline_apply(mm_handle_t handle, mmdeploy_value_t input, mmdeploy_value_t* output) {
int mmdeploy_pipeline_apply(mmdeploy_pipeline_t pipeline, mmdeploy_value_t input,
mmdeploy_value_t* output) {
auto input_sender = mmdeploy_executor_just(input);
if (!input_sender) {
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
mmdeploy_sender_t output_sender{};
if (auto ec = mmdeploy_pipeline_apply_async(handle, input_sender, &output_sender)) {
if (auto ec = mmdeploy_pipeline_apply_async(pipeline, input_sender, &output_sender)) {
return ec;
}
auto _output = mmdeploy_executor_sync_wait(output_sender);
if (!_output) {
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
*output = _output;
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}

View File

@@ -13,44 +13,46 @@ extern "C" {
/******************************************************************************
* Experimental pipeline APIs */
typedef struct mmdeploy_pipeline* mmdeploy_pipeline_t;
/**
* @brief Create pipeline
* @param[in] config config of the pipeline
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[in] exec_info execution options
* @param[out] handle handle of the pipeline
* @param[out] pipeline handle of the pipeline
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_pipeline_create(mmdeploy_value_t config, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mm_handle_t* handle);
mmdeploy_pipeline_t* pipeline);
/**
* @brief Apply pipeline
* @param[in] handle handle of the pipeline
* @param[in] pipeline handle of the pipeline
* @param[in] input input value
* @param[out] output output value
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_pipeline_apply(mm_handle_t handle, mmdeploy_value_t input,
MMDEPLOY_API int mmdeploy_pipeline_apply(mmdeploy_pipeline_t pipeline, mmdeploy_value_t input,
mmdeploy_value_t* output);
/**
* Apply pipeline asynchronously
* @param handle handle of the pipeline
* @param pipeline handle of the pipeline
* @param input input sender that will be consumed by the operation
* @param output output sender
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_pipeline_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_pipeline_apply_async(mmdeploy_pipeline_t pipeline,
mmdeploy_sender_t input, mmdeploy_sender_t* output);
/**
* @brief Destroy pipeline
* @param[in] handle
* @param[in] pipeline handle of the pipeline created by \ref mmdeploy_pipeline_create
*/
MMDEPLOY_API void mmdeploy_pipeline_destroy(mm_handle_t handle);
MMDEPLOY_API void mmdeploy_pipeline_destroy(mmdeploy_pipeline_t pipeline);
#ifdef __cplusplus
}

View File

@@ -4,14 +4,14 @@
#include <numeric>
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/handle.h"
#include "mmdeploy/apis/c/pipeline.h"
#include "common_internal.h"
#include "handle.h"
#include "mmdeploy/codebase/mmpose/mmpose.h"
#include "mmdeploy/core/device.h"
#include "mmdeploy/core/graph.h"
#include "mmdeploy/core/mat.h"
#include "mmdeploy/core/utils/formatter.h"
#include "pipeline.h"
using namespace std;
using namespace mmdeploy;
@@ -55,56 +55,58 @@ const Value& config_template() {
return v;
}
int mmdeploy_pose_detector_create_impl(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_pose_detector_t* detector) {
auto config = config_template();
config["pipeline"]["tasks"][1]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)detector);
}
} // namespace
int mmdeploy_pose_detector_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_pose_detector_t* detector) {
return mmdeploy_pose_detector_create_impl(model, device_name, device_id, nullptr, detector);
}
int mmdeploy_pose_detector_create_by_path(const char* model_path, const char* device_name,
int device_id, mmdeploy_pose_detector_t* detector) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_pose_detector_create_impl(model, device_name, device_id, nullptr, detector);
mmdeploy_model_destroy(model);
return ec;
}
int mmdeploy_pose_detector_apply(mmdeploy_pose_detector_t detector, const mmdeploy_mat_t* mats,
int mat_count, mmdeploy_pose_detection_t** results) {
return mmdeploy_pose_detector_apply_bbox(detector, mats, mat_count, nullptr, nullptr, results);
}
int mmdeploy_pose_detector_apply_bbox(mmdeploy_pose_detector_t detector, const mmdeploy_mat_t* mats,
int mat_count, const mmdeploy_rect_t* bboxes,
const int* bbox_count, mmdeploy_pose_detection_t** results) {
wrapped<mmdeploy_value_t> input;
if (auto ec =
mmdeploy_pose_detector_create_input(mats, mat_count, bboxes, bbox_count, input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_pose_detector_apply_v2(detector, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_pose_detector_get_result(output, results)) {
return ec;
}
return MMDEPLOY_SUCCESS;
}
void mmdeploy_pose_detector_release_result(mmdeploy_pose_detection_t* results, int count) {
if (results == nullptr) {
return;
}
@ -115,17 +117,18 @@ void mmdeploy_pose_detector_release_result(mm_pose_detect_t* results, int count)
delete[] results;
}
void mmdeploy_pose_detector_destroy(mmdeploy_pose_detector_t detector) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)detector);
}
int mmdeploy_pose_detector_create_v2(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info,
mmdeploy_pose_detector_t* detector) {
return mmdeploy_pose_detector_create_impl(model, device_name, device_id, exec_info, detector);
}
int mmdeploy_pose_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
const mmdeploy_rect_t* bboxes, const int* bbox_count,
mmdeploy_value_t* value) {
try {
Value input{Value::kArray};
@ -161,28 +164,29 @@ int mmdeploy_pose_detector_create_input(const mm_mat_t* mats, int mat_count,
input.front().push_back(img_with_boxes);
}
*value = Take(std::move(input));
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("unhandled exception: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MMDEPLOY_E_FAIL;
}
int mmdeploy_pose_detector_apply_v2(mmdeploy_pose_detector_t detector, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_pose_detector_apply_async(mmdeploy_pose_detector_t detector, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_pose_detector_get_result(mmdeploy_value_t output,
mmdeploy_pose_detection_t** results) {
if (!output || !results) {
return MMDEPLOY_E_INVALID_ARG;
}
try {
Value& value = Cast(output)->front();
@ -195,12 +199,12 @@ int mmdeploy_pose_detector_get_result(mmdeploy_value_t output, mm_pose_detect_t*
result_count += v.size();
}
auto deleter = [&](mmdeploy_pose_detection_t* p) {
mmdeploy_pose_detector_release_result(p, static_cast<int>(result_count));
};
std::unique_ptr<mmdeploy_pose_detection_t[], decltype(deleter)> _results(
new mmdeploy_pose_detection_t[result_count]{}, deleter);
size_t result_idx = 0;
for (const auto& img_result : pose_outputs) {
@ -208,7 +212,7 @@ int mmdeploy_pose_detector_get_result(mmdeploy_value_t output, mm_pose_detect_t*
auto& res = _results[result_idx++];
auto size = box_result.key_points.size();
res.point = new mmdeploy_point_t[size];
res.score = new float[size];
res.length = static_cast<int>(size);
@ -220,11 +224,11 @@ int mmdeploy_pose_detector_get_result(mmdeploy_value_t output, mm_pose_detect_t*
}
}
*results = _results.release();
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("unhandled exception: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MMDEPLOY_E_FAIL;
}
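`mmdeploy_pose_detector_get_result` above keeps the output array inside a `std::unique_ptr` with a custom deleter that calls the public release function, so a throw while copying keypoints cannot leak the buffer; on success, `release()` transfers ownership to the caller. A self-contained sketch of that RAII shape (all `demo_*` names are illustrative; the deleter here captures the count by value, whereas the source captures by reference, which is safe only because the deleter never outlives the enclosing scope):

```cpp
#include <cassert>
#include <memory>

struct demo_result { float score; };

// Public release function, mirroring mmdeploy_*_release_result.
void demo_release_result(demo_result* p, int /*count*/) { delete[] p; }

int demo_get_result(demo_result** out, int count, bool fail) {
  auto deleter = [count](demo_result* p) { demo_release_result(p, count); };
  std::unique_ptr<demo_result[], decltype(deleter)> results(
      new demo_result[count]{}, deleter);
  for (int i = 0; i < count; ++i) {
    if (fail) return -1;  // early exit: the deleter frees the array here
    results[i].score = 1.0f;
  }
  *out = results.release();  // success: caller must call demo_release_result
  return 0;
}
```

Routing the deleter through the public release function keeps the allocation and deallocation strategies paired in one place, so the internal `new[]` and the caller-visible release can never drift apart.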

@ -10,16 +10,19 @@
#include "common.h"
#include "executor.h"
#include "model.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mmdeploy_pose_detection_t {
mmdeploy_point_t* point; ///< keypoints
float* score; ///< keypoint scores
int length; ///< number of keypoints
} mmdeploy_pose_detection_t;
typedef struct mmdeploy_pose_detector* mmdeploy_pose_detector_t;
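`mmdeploy_pose_detection_t` stores parallel arrays: `point[i]` and `score[i]` describe keypoint `i`, with `length` entries in each. A hedged sketch of a consumer that picks the most confident keypoint from such a result (`demo_*` types stand in for the real structs):

```cpp
#include <cassert>

struct demo_point { float x, y; };
// Same parallel-array layout as mmdeploy_pose_detection_t.
struct demo_pose { demo_point* point; float* score; int length; };

int most_confident_keypoint(const demo_pose& res) {
  int best = -1;
  float best_score = -1.0f;
  for (int i = 0; i < res.length; ++i) {
    if (res.score[i] > best_score) {
      best_score = res.score[i];
      best = i;  // index into both res.point and res.score
    }
  }
  return best;  // -1 when the result is empty
}
```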
/**
* @brief Create a pose detector instance
@ -27,29 +30,29 @@ typedef struct mm_pose_detect_t {
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] detector handle of the created pose detector, which must be destroyed
* by \ref mmdeploy_pose_detector_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_pose_detector_create(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_pose_detector_t* detector);
/**
* @brief Create a pose detector instance
* @param[in] model_path path to pose detection model
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] detector handle of the created pose detector, which must be destroyed
* by \ref mmdeploy_pose_detector_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_pose_detector_create_by_path(const char* model_path,
const char* device_name, int device_id,
mmdeploy_pose_detector_t* detector);
/**
* @brief Apply pose detector to a batch of images with full image roi
* @param[in] detector pose detector's handle created by \ref
* mmdeploy_pose_detector_create_by_path
* @param[in] images a batch of images
* @param[in] count number of images in the batch
@ -57,12 +60,13 @@ MMDEPLOY_API int mmdeploy_pose_detector_create_by_path(const char* model_path,
* by \ref mmdeploy_pose_detector_release_result
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_pose_detector_apply(mmdeploy_pose_detector_t detector,
const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_pose_detection_t** results);
/**
* @brief Apply pose detector to a batch of images supplied with bboxes(roi)
* @param[in] detector pose detector's handle created by \ref
* mmdeploy_pose_detector_create_by_path
* @param[in] images a batch of images
* @param[in] image_count number of images in the batch
@ -72,44 +76,48 @@ MMDEPLOY_API int mmdeploy_pose_detector_apply(mm_handle_t handle, const mm_mat_t
* bboxes, must be released by \ref mmdeploy_pose_detector_release_result
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_pose_detector_apply_bbox(mmdeploy_pose_detector_t detector,
const mmdeploy_mat_t* mats, int mat_count,
const mmdeploy_rect_t* bboxes,
const int* bbox_count,
mmdeploy_pose_detection_t** results);
/** @brief Release result buffer returned by \ref mmdeploy_pose_detector_apply or \ref
* mmdeploy_pose_detector_apply_bbox
* @param[in] results result buffer returned by the pose detector
* @param[in] count length of \p results
*/
MMDEPLOY_API void mmdeploy_pose_detector_release_result(mmdeploy_pose_detection_t* results,
int count);
/**
* @brief destroy pose_detector
* @param[in] detector handle of pose_detector created by \ref
* mmdeploy_pose_detector_create_by_path or \ref mmdeploy_pose_detector_create
*/
MMDEPLOY_API void mmdeploy_pose_detector_destroy(mmdeploy_pose_detector_t detector);
/******************************************************************************
* Experimental asynchronous APIs */
MMDEPLOY_API int mmdeploy_pose_detector_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_pose_detector_t* detector);
MMDEPLOY_API int mmdeploy_pose_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
const mmdeploy_rect_t* bboxes,
const int* bbox_count,
mmdeploy_value_t* value);
MMDEPLOY_API int mmdeploy_pose_detector_apply_v2(mmdeploy_pose_detector_t detector,
mmdeploy_value_t input, mmdeploy_value_t* output);
MMDEPLOY_API int mmdeploy_pose_detector_apply_async(mmdeploy_pose_detector_t detector,
mmdeploy_sender_t input,
mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_pose_detector_get_result(mmdeploy_value_t output,
mmdeploy_pose_detection_t** results);
#ifdef __cplusplus
}

@ -2,14 +2,14 @@
#include "restorer.h"
#include "common_internal.h"
#include "executor_internal.h"
#include "handle.h"
#include "mmdeploy/codebase/mmedit/mmedit.h"
#include "mmdeploy/core/device.h"
#include "mmdeploy/core/graph.h"
#include "mmdeploy/core/utils/formatter.h"
#include "pipeline.h"
using namespace mmdeploy;
@ -40,79 +40,83 @@ const Value& config_template() {
return v;
}
int mmdeploy_restorer_create_impl(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mmdeploy_restorer_t* restorer) {
auto config = config_template();
config["pipeline"]["tasks"][0]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)restorer);
}
} // namespace
int mmdeploy_restorer_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_restorer_t* restorer) {
return mmdeploy_restorer_create_impl(model, device_name, device_id, nullptr, restorer);
}
int mmdeploy_restorer_create_by_path(const char* model_path, const char* device_name, int device_id,
mmdeploy_restorer_t* restorer) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_restorer_create_impl(model, device_name, device_id, nullptr, restorer);
mmdeploy_model_destroy(model);
return ec;
}
int mmdeploy_restorer_apply(mmdeploy_restorer_t restorer, const mmdeploy_mat_t* images, int count,
mmdeploy_mat_t** results) {
wrapped<mmdeploy_value_t> input;
if (auto ec = mmdeploy_restorer_create_input(images, count, input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_restorer_apply_v2(restorer, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_restorer_get_result(output, results)) {
return ec;
}
return MMDEPLOY_SUCCESS;
}
void mmdeploy_restorer_release_result(mmdeploy_mat_t* results, int count) {
for (int i = 0; i < count; ++i) {
delete[] results[i].data;
}
delete[] results;
}
void mmdeploy_restorer_destroy(mmdeploy_restorer_t restorer) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)restorer);
}
int mmdeploy_restorer_create_v2(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mmdeploy_restorer_t* restorer) {
return mmdeploy_restorer_create_impl(model, device_name, device_id, exec_info, restorer);
}
int mmdeploy_restorer_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value) {
return mmdeploy_common_create_input(mats, mat_count, value);
}
int mmdeploy_restorer_apply_v2(mmdeploy_restorer_t restorer, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)restorer, input, output);
}
int mmdeploy_restorer_apply_async(mmdeploy_restorer_t restorer, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)restorer, input, output);
}
int mmdeploy_restorer_get_result(mmdeploy_value_t output, mmdeploy_mat_t** results) {
if (!output || !results) {
return MMDEPLOY_E_INVALID_ARG;
}
try {
const Value& value = Cast(output)->front();
@ -121,29 +125,30 @@ int mmdeploy_restorer_get_result(mmdeploy_value_t output, mm_mat_t** results) {
auto count = restorer_output.size();
auto deleter = [&](mmdeploy_mat_t* p) {
mmdeploy_restorer_release_result(p, static_cast<int>(count));
};
std::unique_ptr<mmdeploy_mat_t[], decltype(deleter)> _results(new mmdeploy_mat_t[count]{},
deleter);
for (int i = 0; i < count; ++i) {
auto upscale = restorer_output[i];
auto& res = _results[i];
res.data = new uint8_t[upscale.byte_size()];
memcpy(res.data, upscale.data<uint8_t>(), upscale.byte_size());
res.format = (mmdeploy_pixel_format_t)upscale.pixel_format();
res.height = upscale.height();
res.width = upscale.width();
res.channel = upscale.channel();
res.type = (mmdeploy_data_type_t)upscale.type();
}
*results = _results.release();
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("unhandled exception: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MMDEPLOY_E_FAIL;
}
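`mmdeploy_restorer_get_result` copies each restored image out of the internal tensor into a heap buffer the caller owns, and the matching release function frees every per-image buffer before the array itself. A toy version of that copy-out/release pairing, with `demo_mat` as an illustrative stand-in for `mmdeploy_mat_t`:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

struct demo_mat { uint8_t* data; int width, height, channel; };

// Deep-copies one source image into `count` result mats; each result owns
// its own data buffer, as in mmdeploy_restorer_get_result's memcpy.
demo_mat* demo_copy_out(const uint8_t* src, int w, int h, int c, int count) {
  demo_mat* results = new demo_mat[count]{};
  for (int i = 0; i < count; ++i) {
    size_t bytes = size_t(w) * h * c;
    results[i] = {new uint8_t[bytes], w, h, c};
    std::memcpy(results[i].data, src, bytes);  // caller owns this copy
  }
  return results;
}

// Mirrors mmdeploy_restorer_release_result: free per-image buffers, then
// the array itself.
void demo_release(demo_mat* results, int count) {
  for (int i = 0; i < count; ++i) delete[] results[i].data;
  delete[] results;
}
```

The deep copy decouples the C result from the SDK's internal device/host memory, at the cost of one extra allocation and copy per image.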

@ -10,77 +10,80 @@
#include "common.h"
#include "executor.h"
#include "model.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mmdeploy_restorer* mmdeploy_restorer_t;
/**
* @brief Create a restorer instance
* @param[in] model an instance of image restoration model created by
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] restorer handle of the created restorer, which must be destroyed
* by \ref mmdeploy_restorer_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_restorer_create(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_restorer_t* restorer);
/**
* @brief Create a restorer instance
* @param[in] model_path path to image restoration model
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] restorer handle of the created restorer, which must be destroyed
* by \ref mmdeploy_restorer_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_restorer_create_by_path(const char* model_path, const char* device_name,
int device_id, mmdeploy_restorer_t* restorer);
/**
* @brief Apply restorer to a batch of images
* @param[in] restorer restorer's handle created by \ref mmdeploy_restorer_create_by_path
* @param[in] images a batch of images
* @param[in] count number of images in the batch
* @param[out] results a linear buffer containing the restored images, must be released
* by \ref mmdeploy_restorer_release_result
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_restorer_apply(mmdeploy_restorer_t restorer, const mmdeploy_mat_t* images,
int count, mmdeploy_mat_t** results);
/** @brief Release result buffer returned by \ref mmdeploy_restorer_apply
* @param[in] results result buffer returned by the restorer
* @param[in] count length of \p results
*/
MMDEPLOY_API void mmdeploy_restorer_release_result(mmdeploy_mat_t* results, int count);
/**
* @brief destroy restorer
* @param[in] restorer handle of restorer created by \ref mmdeploy_restorer_create_by_path
*/
MMDEPLOY_API void mmdeploy_restorer_destroy(mmdeploy_restorer_t restorer);
/******************************************************************************
* Experimental asynchronous APIs */
MMDEPLOY_API int mmdeploy_restorer_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_restorer_t* restorer);
MMDEPLOY_API int mmdeploy_restorer_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value);
MMDEPLOY_API int mmdeploy_restorer_apply_v2(mmdeploy_restorer_t restorer, mmdeploy_value_t input,
mmdeploy_value_t* output);
MMDEPLOY_API int mmdeploy_restorer_apply_async(mmdeploy_restorer_t restorer,
mmdeploy_sender_t input, mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_restorer_get_result(mmdeploy_value_t output, mmdeploy_mat_t** results);
#ifdef __cplusplus
}

@ -4,13 +4,13 @@
#include <numeric>
#include "common_internal.h"
#include "handle.h"
#include "mmdeploy/codebase/mmrotate/mmrotate.h"
#include "mmdeploy/core/graph.h"
#include "mmdeploy/core/mat.h"
#include "mmdeploy/core/utils/formatter.h"
#include "pipeline.h"
using namespace std;
using namespace mmdeploy;
@ -42,89 +42,88 @@ Value& config_template() {
return v;
}
int mmdeploy_rotated_detector_create_impl(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_rotated_detector_t* detector) {
auto config = config_template();
config["pipeline"]["tasks"][0]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)detector);
}
} // namespace
int mmdeploy_rotated_detector_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_rotated_detector_t* detector) {
return mmdeploy_rotated_detector_create_impl(model, device_name, device_id, nullptr, detector);
}
int mmdeploy_rotated_detector_create_by_path(const char* model_path, const char* device_name,
int device_id, mmdeploy_rotated_detector_t* detector) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_rotated_detector_create_impl(model, device_name, device_id, nullptr, detector);
mmdeploy_model_destroy(model);
return ec;
}
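Every `*_create_by_path` in these files follows the same shape: materialize a temporary model handle, hand it to the `*_create_impl`, then destroy the temporary regardless of whether creation succeeded (the pipeline copies the model data it needs). A generic toy of that shape, with `demo_*` as illustrative names:

```cpp
#include <cassert>
#include <string>

int g_live_models = 0;  // tracks outstanding model handles

int demo_model_create(const std::string& path, int* model) {
  if (path.empty()) return 1;  // nonzero error code, like a failed load
  *model = ++g_live_models;
  return 0;
}
void demo_model_destroy(int /*model*/) { --g_live_models; }

int demo_detector_create_by_path(const std::string& path, bool* created) {
  int model = 0;
  if (int ec = demo_model_create(path, &model)) return ec;  // propagate error
  *created = true;            // stands in for *_create_impl(model, ...)
  demo_model_destroy(model);  // temporary handle always released
  return 0;
}
```

Releasing the model handle unconditionally keeps the convenience wrapper leak-free without any branching on the creation result.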
int mmdeploy_rotated_detector_apply(mmdeploy_rotated_detector_t detector,
const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_rotated_detection_t** results, int** result_count) {
wrapped<mmdeploy_value_t> input;
if (auto ec = mmdeploy_rotated_detector_create_input(mats, mat_count, input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_rotated_detector_apply_v2(detector, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_rotated_detector_get_result(output, results, result_count)) {
return ec;
}
return MMDEPLOY_SUCCESS;
}
void mmdeploy_rotated_detector_release_result(mmdeploy_rotated_detection_t* results,
const int* result_count) {
delete[] results;
delete[] result_count;
}
void mmdeploy_rotated_detector_destroy(mmdeploy_rotated_detector_t detector) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)detector);
}
int mmdeploy_rotated_detector_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_rotated_detector_t* detector) {
return mmdeploy_rotated_detector_create_impl(model, device_name, device_id, exec_info, detector);
}
int mmdeploy_rotated_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* input) {
return mmdeploy_common_create_input(mats, mat_count, input);
}
int mmdeploy_rotated_detector_apply_v2(mmdeploy_rotated_detector_t detector, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_rotated_detector_apply_async(mmdeploy_rotated_detector_t detector,
mmdeploy_sender_t input, mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_rotated_detector_get_result(mmdeploy_value_t output,
mmdeploy_rotated_detection_t** results,
int** result_count) {
if (!output || !results || !result_count) {
return MMDEPLOY_E_INVALID_ARG;
}
try {
@ -142,7 +141,8 @@ int mmdeploy_rotated_detector_get_result(mmdeploy_value_t output, mm_rotated_det
std::unique_ptr<int[]> result_count_data(new int[_result_count.size()]{});
std::copy(_result_count.begin(), _result_count.end(), result_count_data.get());
std::unique_ptr<mmdeploy_rotated_detection_t[]> result_data(
new mmdeploy_rotated_detection_t[total]{});
auto result_ptr = result_data.get();
for (const auto& det_output : detector_outputs) {
@ -160,12 +160,12 @@ int mmdeploy_rotated_detector_get_result(mmdeploy_value_t output, mm_rotated_det
*result_count = result_count_data.release();
*results = result_data.release();
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("unhandled exception: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MMDEPLOY_E_FAIL;
}
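`mmdeploy_rotated_detector_get_result` fills two parallel buffers: `result_count[i]` holds the number of detections for image `i`, and `results` is their concatenation. A caller therefore walks the flat array with a running offset; a hedged sketch of that indexing:

```cpp
#include <cassert>
#include <vector>

// Computes, for each image, the index of its first detection inside the
// flattened results buffer returned by the detector.
std::vector<int> per_image_offsets(const int* result_count, int num_images) {
  std::vector<int> offsets(num_images, 0);
  int running = 0;
  for (int i = 0; i < num_images; ++i) {
    offsets[i] = running;        // first detection of image i
    running += result_count[i];  // skip past image i's detections
  }
  return offsets;
}
```

Detection `j` of image `i` then lives at `results[offsets[i] + j]`, with `0 <= j < result_count[i]`.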

@ -10,16 +10,19 @@
#include "common.h"
#include "executor.h"
#include "model.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mmdeploy_rotated_detection_t {
int label_id;
float score;
float rbbox[5]; // cx, cy, w, h, angle
} mmdeploy_rotated_detection_t;
typedef struct mmdeploy_rotated_detector* mmdeploy_rotated_detector_t;
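The `rbbox` layout above is `(cx, cy, w, h, angle)`. As a sketch, a helper that expands one such box into its four corner points; the angle is assumed to be in radians here, which is an assumption worth checking against the deployed mmrotate model's angle convention:

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct pt { float x, y; };

// Rotates the four half-extent corners of a (cx, cy, w, h, angle) box into
// image coordinates, in order: top-left, top-right, bottom-right, bottom-left
// of the unrotated box.
std::array<pt, 4> rbbox_corners(const float rb[5]) {
  float cx = rb[0], cy = rb[1], w = rb[2], h = rb[3], a = rb[4];
  float c = std::cos(a), s = std::sin(a);
  // half-extent offsets along the box's local axes
  float dx[4] = {-w / 2, w / 2, w / 2, -w / 2};
  float dy[4] = {-h / 2, -h / 2, h / 2, h / 2};
  std::array<pt, 4> out{};
  for (int i = 0; i < 4; ++i) {
    out[i] = {cx + dx[i] * c - dy[i] * s, cy + dx[i] * s + dy[i] * c};
  }
  return out;
}
```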
/**
* @brief Create rotated detector's handle
@ -27,27 +30,28 @@ typedef struct mm_rotated_detect_t {
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] detector instance of a rotated detector
* @return status of creating rotated detector's handle
*/
MMDEPLOY_API int mmdeploy_rotated_detector_create(mmdeploy_model_t model, const char* device_name,
int device_id,
mmdeploy_rotated_detector_t* detector);
/**
* @brief Create rotated detector's handle
* @param[in] model_path path of mmrotate sdk model exported by mmdeploy model converter
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] detector instance of a rotated detector
* @return status of creating rotated detector's handle
*/
MMDEPLOY_API int mmdeploy_rotated_detector_create_by_path(const char* model_path,
const char* device_name, int device_id,
mmdeploy_rotated_detector_t* detector);
/**
* @brief Apply rotated detector to batch images and get their inference results
* @param[in] detector rotated detector's handle created by \ref
* mmdeploy_rotated_detector_create_by_path
* @param[in] mats a batch of images
* @param[in] mat_count number of images in the batch
@ -58,23 +62,24 @@ MMDEPLOY_API int mmdeploy_rotated_detector_create_by_path(const char* model_path
* mmdeploy_rotated_detector_release_result
* @return status of inference
*/
MMDEPLOY_API int mmdeploy_rotated_detector_apply(mmdeploy_rotated_detector_t detector,
const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_rotated_detection_t** results,
int** result_count);
/** @brief Release the inference result buffer created by \ref mmdeploy_rotated_detector_apply
* @param[in] results rotated detection results buffer
* @param[in] result_count \p results size buffer
*/
MMDEPLOY_API void mmdeploy_rotated_detector_release_result(mm_rotated_detect_t* results,
MMDEPLOY_API void mmdeploy_rotated_detector_release_result(mmdeploy_rotated_detection_t* results,
const int* result_count);
/**
* @brief Destroy rotated detector's handle
* @param[in] handle rotated detector's handle created by \ref
* @param[in] detector rotated detector's handle created by \ref
* mmdeploy_rotated_detector_create_by_path or by \ref mmdeploy_rotated_detector_create
*/
MMDEPLOY_API void mmdeploy_rotated_detector_destroy(mm_handle_t handle);
MMDEPLOY_API void mmdeploy_rotated_detector_destroy(mmdeploy_rotated_detector_t detector);
/******************************************************************************
* Experimental asynchronous APIs */
@@ -83,9 +88,10 @@ MMDEPLOY_API void mmdeploy_rotated_detector_destroy(mm_handle_t handle);
* @brief Same as \ref mmdeploy_rotated_detector_create, but allows controlling the execution
* context of tasks via exec_info
*/
MMDEPLOY_API int mmdeploy_rotated_detector_create_v2(mm_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mm_handle_t* handle);
MMDEPLOY_API int mmdeploy_rotated_detector_create_v2(mmdeploy_model_t model,
const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info,
mmdeploy_rotated_detector_t* detector);
/**
* @brief Pack rotated detector inputs into mmdeploy_value_t
@@ -93,23 +99,25 @@ MMDEPLOY_API int mmdeploy_rotated_detector_create_v2(mm_model_t model, const cha
* @param[in] mat_count number of images in the batch
* @return the created value
*/
MMDEPLOY_API int mmdeploy_rotated_detector_create_input(const mm_mat_t* mats, int mat_count,
MMDEPLOY_API int mmdeploy_rotated_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* input);
/**
* @brief Same as \ref mmdeploy_rotated_detector_apply, but input and output are packed in \ref
* mmdeploy_value_t.
*/
MMDEPLOY_API int mmdeploy_rotated_detector_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
MMDEPLOY_API int mmdeploy_rotated_detector_apply_v2(mmdeploy_rotated_detector_t detector,
mmdeploy_value_t input,
mmdeploy_value_t* output);
/**
* @brief Apply rotated detector asynchronously
* @param[in] handle handle to the detector
* @param[in] detector handle to the detector
* @param[in] input input sender
* @return output sender
*/
MMDEPLOY_API int mmdeploy_rotated_detector_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
MMDEPLOY_API int mmdeploy_rotated_detector_apply_async(mmdeploy_rotated_detector_t detector,
mmdeploy_sender_t input,
mmdeploy_sender_t* output);
/**
@@ -123,7 +131,7 @@ MMDEPLOY_API int mmdeploy_rotated_detector_apply_async(mm_handle_t handle, mmdep
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_rotated_detector_get_result(mmdeploy_value_t output,
mm_rotated_detect_t** results,
mmdeploy_rotated_detection_t** results,
int** result_count);
#ifdef __cplusplus


@@ -1,16 +1,16 @@
// Copyright (c) OpenMMLab. All rights reserved.
#include "mmdeploy/apis/c/segmentor.h"
#include "segmentor.h"
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/handle.h"
#include "mmdeploy/apis/c/pipeline.h"
#include "common_internal.h"
#include "handle.h"
#include "mmdeploy/codebase/mmseg/mmseg.h"
#include "mmdeploy/core/device.h"
#include "mmdeploy/core/graph.h"
#include "mmdeploy/core/mat.h"
#include "mmdeploy/core/tensor.h"
#include "mmdeploy/core/utils/formatter.h"
#include "pipeline.h"
using namespace std;
using namespace mmdeploy;
@@ -42,49 +42,51 @@ Value& config_template() {
return v;
}
int mmdeploy_segmentor_create_impl(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
int mmdeploy_segmentor_create_impl(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info,
mmdeploy_segmentor_t* segmentor) {
auto config = config_template();
config["pipeline"]["tasks"][0]["params"]["model"] = *static_cast<Model*>(model);
config["pipeline"]["tasks"][0]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info, handle);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)segmentor);
}
} // namespace
int mmdeploy_segmentor_create(mm_model_t model, const char* device_name, int device_id,
mm_handle_t* handle) {
return mmdeploy_segmentor_create_impl(model, device_name, device_id, nullptr, handle);
int mmdeploy_segmentor_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_segmentor_t* segmentor) {
return mmdeploy_segmentor_create_impl(model, device_name, device_id, nullptr, segmentor);
}
int mmdeploy_segmentor_create_by_path(const char* model_path, const char* device_name,
int device_id, mm_handle_t* handle) {
mm_model_t model{};
int device_id, mmdeploy_segmentor_t* segmentor) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_segmentor_create_impl(model, device_name, device_id, nullptr, handle);
auto ec = mmdeploy_segmentor_create_impl(model, device_name, device_id, nullptr, segmentor);
mmdeploy_model_destroy(model);
return ec;
}
int mmdeploy_segmentor_apply(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
mm_segment_t** results) {
int mmdeploy_segmentor_apply(mmdeploy_segmentor_t segmentor, const mmdeploy_mat_t* mats,
int mat_count, mmdeploy_segmentation_t** results) {
wrapped<mmdeploy_value_t> input;
if (auto ec = mmdeploy_segmentor_create_input(mats, mat_count, input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_segmentor_apply_v2(handle, input, output.ptr())) {
if (auto ec = mmdeploy_segmentor_apply_v2(segmentor, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_segmentor_get_result(output, results)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
void mmdeploy_segmentor_release_result(mm_segment_t* results, int count) {
void mmdeploy_segmentor_release_result(mmdeploy_segmentation_t* results, int count) {
if (results == nullptr) {
return;
}
@@ -95,43 +97,41 @@ void mmdeploy_segmentor_release_result(mm_segment_t* results, int count) {
delete[] results;
}
void mmdeploy_segmentor_destroy(mm_handle_t handle) {
if (handle != nullptr) {
auto segmentor = static_cast<AsyncHandle*>(handle);
delete segmentor;
}
void mmdeploy_segmentor_destroy(mmdeploy_segmentor_t segmentor) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)segmentor);
}
int mmdeploy_segmentor_create_v2(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
return mmdeploy_segmentor_create_impl(model, device_name, device_id, exec_info, handle);
int mmdeploy_segmentor_create_v2(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mmdeploy_segmentor_t* segmentor) {
return mmdeploy_segmentor_create_impl(model, device_name, device_id, exec_info, segmentor);
}
int mmdeploy_segmentor_create_input(const mm_mat_t* mats, int mat_count, mmdeploy_value_t* value) {
int mmdeploy_segmentor_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value) {
return mmdeploy_common_create_input(mats, mat_count, value);
}
int mmdeploy_segmentor_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
int mmdeploy_segmentor_apply_v2(mmdeploy_segmentor_t segmentor, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply(handle, input, output);
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)segmentor, input, output);
}
int mmdeploy_segmentor_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
int mmdeploy_segmentor_apply_async(mmdeploy_segmentor_t segmentor, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async(handle, input, output);
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)segmentor, input, output);
}
int mmdeploy_segmentor_get_result(mmdeploy_value_t output, mm_segment_t** results) {
int mmdeploy_segmentor_get_result(mmdeploy_value_t output, mmdeploy_segmentation_t** results) {
try {
const auto& value = Cast(output)->front();
size_t image_count = value.size();
auto deleter = [&](mm_segment_t* p) {
auto deleter = [&](mmdeploy_segmentation_t* p) {
mmdeploy_segmentor_release_result(p, static_cast<int>(image_count));
};
unique_ptr<mm_segment_t[], decltype(deleter)> _results(new mm_segment_t[image_count]{},
deleter);
unique_ptr<mmdeploy_segmentation_t[], decltype(deleter)> _results(
new mmdeploy_segmentation_t[image_count]{}, deleter);
auto results_ptr = _results.get();
for (auto i = 0; i < image_count; ++i, ++results_ptr) {
auto& output_item = value[i];
@@ -146,12 +146,12 @@ int mmdeploy_segmentor_get_result(mmdeploy_value_t output, mm_segment_t** result
std::copy_n(mask.data<int>(), mask_size, results_ptr->mask);
}
*results = _results.release();
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("exception caught: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}


@@ -10,18 +10,21 @@
#include "common.h"
#include "executor.h"
#include "model.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mm_segment_t {
typedef struct mmdeploy_segmentation_t {
int height; ///< height of \p mask that equals to the input image's height
int width; ///< width of \p mask that equals to the input image's width
int classes; ///< the number of labels in \p mask
int* mask; ///< segmentation mask of the input image, in which mask[i * width + j] indicates
///< the label id of pixel at (i, j)
} mm_segment_t;
} mmdeploy_segmentation_t;
typedef struct mmdeploy_segmentor* mmdeploy_segmentor_t;
/**
* @brief Create segmentor's handle
@@ -29,28 +32,28 @@ typedef struct mm_segment_t {
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle instance of a segmentor, which must be destroyed
* @param[out] segmentor instance of a segmentor, which must be destroyed
* by \ref mmdeploy_segmentor_destroy
* @return status of creating segmentor's handle
*/
MMDEPLOY_API int mmdeploy_segmentor_create(mm_model_t model, const char* device_name, int device_id,
mm_handle_t* handle);
MMDEPLOY_API int mmdeploy_segmentor_create(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_segmentor_t* segmentor);
/**
* @brief Create segmentor's handle
* @param[in] model_path path of mmsegmentation sdk model exported by mmdeploy model converter
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle instance of a segmentor, which must be destroyed
* @param[out] segmentor instance of a segmentor, which must be destroyed
* by \ref mmdeploy_segmentor_destroy
* @return status of creating segmentor's handle
*/
MMDEPLOY_API int mmdeploy_segmentor_create_by_path(const char* model_path, const char* device_name,
int device_id, mm_handle_t* handle);
int device_id, mmdeploy_segmentor_t* segmentor);
/**
* @brief Apply segmentor to batch images and get their inference results
* @param[in] handle segmentor's handle created by \ref mmdeploy_segmentor_create_by_path or \ref
* @param[in] segmentor segmentor's handle created by \ref mmdeploy_segmentor_create_by_path or \ref
* mmdeploy_segmentor_create
* @param[in] mats a batch of images
* @param[in] mat_count number of images in the batch
@@ -58,39 +61,41 @@ MMDEPLOY_API int mmdeploy_segmentor_create_by_path(const char* model_path, const
* image. It must be released by \ref mmdeploy_segmentor_release_result
* @return status of inference
*/
MMDEPLOY_API int mmdeploy_segmentor_apply(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
mm_segment_t** results);
MMDEPLOY_API int mmdeploy_segmentor_apply(mmdeploy_segmentor_t segmentor,
const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_segmentation_t** results);
/**
* @brief Release result buffer returned by \ref mmdeploy_segmentor_apply
* @param[in] results result buffer
* @param[in] count length of \p results
*/
MMDEPLOY_API void mmdeploy_segmentor_release_result(mm_segment_t* results, int count);
MMDEPLOY_API void mmdeploy_segmentor_release_result(mmdeploy_segmentation_t* results, int count);
/**
* @brief Destroy segmentor's handle
* @param[in] handle segmentor's handle created by \ref mmdeploy_segmentor_create_by_path
* @param[in] segmentor segmentor's handle created by \ref mmdeploy_segmentor_create_by_path
*/
MMDEPLOY_API void mmdeploy_segmentor_destroy(mm_handle_t handle);
MMDEPLOY_API void mmdeploy_segmentor_destroy(mmdeploy_segmentor_t segmentor);
/******************************************************************************
* Experimental asynchronous APIs */
MMDEPLOY_API int mmdeploy_segmentor_create_v2(mm_model_t model, const char* device_name,
MMDEPLOY_API int mmdeploy_segmentor_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mm_handle_t* handle);
mmdeploy_segmentor_t* segmentor);
MMDEPLOY_API int mmdeploy_segmentor_create_input(const mm_mat_t* mats, int mat_count,
MMDEPLOY_API int mmdeploy_segmentor_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* value);
MMDEPLOY_API int mmdeploy_segmentor_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
MMDEPLOY_API int mmdeploy_segmentor_apply_v2(mmdeploy_segmentor_t segmentor, mmdeploy_value_t input,
mmdeploy_value_t* output);
MMDEPLOY_API int mmdeploy_segmentor_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_segmentor_apply_async(mmdeploy_segmentor_t segmentor,
mmdeploy_sender_t input, mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_segmentor_get_result(mmdeploy_value_t output, mm_segment_t** results);
MMDEPLOY_API int mmdeploy_segmentor_get_result(mmdeploy_value_t output,
mmdeploy_segmentation_t** results);
#ifdef __cplusplus
}


@@ -4,14 +4,14 @@
#include <numeric>
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/executor_internal.h"
#include "mmdeploy/apis/c/model.h"
#include "mmdeploy/apis/c/pipeline.h"
#include "common_internal.h"
#include "executor_internal.h"
#include "mmdeploy/codebase/mmocr/mmocr.h"
#include "mmdeploy/core/model.h"
#include "mmdeploy/core/status_code.h"
#include "mmdeploy/core/utils/formatter.h"
#include "model.h"
#include "pipeline.h"
using namespace std;
using namespace mmdeploy;
@@ -43,72 +43,76 @@ const Value& config_template() {
// clang-format on
}
int mmdeploy_text_detector_create_impl(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
int mmdeploy_text_detector_create_impl(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_text_detector_t* detector) {
auto config = config_template();
config["pipeline"]["tasks"][0]["params"]["model"] = *static_cast<Model*>(model);
config["pipeline"]["tasks"][0]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info, handle);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)detector);
}
} // namespace
int mmdeploy_text_detector_create(mm_model_t model, const char* device_name, int device_id,
mm_handle_t* handle) {
return mmdeploy_text_detector_create_impl(model, device_name, device_id, nullptr, handle);
int mmdeploy_text_detector_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_text_detector_t* detector) {
return mmdeploy_text_detector_create_impl(model, device_name, device_id, nullptr, detector);
}
int mmdeploy_text_detector_create_v2(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
return mmdeploy_text_detector_create_impl(model, device_name, device_id, exec_info, handle);
int mmdeploy_text_detector_create_v2(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info,
mmdeploy_text_detector_t* detector) {
return mmdeploy_text_detector_create_impl(model, device_name, device_id, exec_info, detector);
}
int mmdeploy_text_detector_create_by_path(const char* model_path, const char* device_name,
int device_id, mm_handle_t* handle) {
mm_model_t model{};
int device_id, mmdeploy_text_detector_t* detector) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_text_detector_create_impl(model, device_name, device_id, nullptr, handle);
auto ec = mmdeploy_text_detector_create_impl(model, device_name, device_id, nullptr, detector);
mmdeploy_model_destroy(model);
return ec;
}
int mmdeploy_text_detector_create_input(const mm_mat_t* mats, int mat_count,
int mmdeploy_text_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* input) {
return mmdeploy_common_create_input(mats, mat_count, input);
}
int mmdeploy_text_detector_apply(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
mm_text_detect_t** results, int** result_count) {
int mmdeploy_text_detector_apply(mmdeploy_text_detector_t detector, const mmdeploy_mat_t* mats,
int mat_count, mmdeploy_text_detection_t** results,
int** result_count) {
wrapped<mmdeploy_value_t> input;
if (auto ec = mmdeploy_text_detector_create_input(mats, mat_count, input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_text_detector_apply_v2(handle, input, output.ptr())) {
if (auto ec = mmdeploy_text_detector_apply_v2(detector, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_text_detector_get_result(output, results, result_count)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
int mmdeploy_text_detector_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
int mmdeploy_text_detector_apply_v2(mmdeploy_text_detector_t detector, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply(handle, input, output);
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_text_detector_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
int mmdeploy_text_detector_apply_async(mmdeploy_text_detector_t detector, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async(handle, input, output);
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)detector, input, output);
}
int mmdeploy_text_detector_get_result(mmdeploy_value_t output, mm_text_detect_t** results,
int mmdeploy_text_detector_get_result(mmdeploy_value_t output, mmdeploy_text_detection_t** results,
int** result_count) {
if (!output || !results || !result_count) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
try {
Value& value = reinterpret_cast<Value*>(output)->front();
@@ -125,7 +129,8 @@ int mmdeploy_text_detector_get_result(mmdeploy_value_t output, mm_text_detect_t*
std::unique_ptr<int[]> result_count_data(new int[_result_count.size()]{});
std::copy(_result_count.begin(), _result_count.end(), result_count_data.get());
std::unique_ptr<mm_text_detect_t[]> result_data(new mm_text_detect_t[total]{});
std::unique_ptr<mmdeploy_text_detection_t[]> result_data(
new mmdeploy_text_detection_t[total]{});
auto result_ptr = result_data.get();
for (const auto& det_output : detector_outputs) {
@@ -142,7 +147,7 @@ int mmdeploy_text_detector_get_result(mmdeploy_value_t output, mm_text_detect_t*
*result_count = result_count_data.release();
*results = result_data.release();
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("unhandled exception: {}", e.what());
@@ -152,38 +157,42 @@ int mmdeploy_text_detector_get_result(mmdeploy_value_t output, mm_text_detect_t*
return 0;
}
void mmdeploy_text_detector_release_result(mm_text_detect_t* results, const int* result_count,
int count) {
void mmdeploy_text_detector_release_result(mmdeploy_text_detection_t* results,
const int* result_count, int count) {
delete[] results;
delete[] result_count;
}
void mmdeploy_text_detector_destroy(mm_handle_t handle) { mmdeploy_pipeline_destroy(handle); }
void mmdeploy_text_detector_destroy(mmdeploy_text_detector_t detector) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)detector);
}
int mmdeploy_text_detector_apply_async_v2(mm_handle_t handle, const mm_mat_t* imgs, int img_count,
int mmdeploy_text_detector_apply_async_v2(mmdeploy_text_detector_t detector,
const mmdeploy_mat_t* imgs, int img_count,
mmdeploy_text_detector_continue_t cont, void* context,
mmdeploy_sender_t* output) {
mmdeploy_sender_t result_sender{};
if (auto ec = mmdeploy_text_detector_apply_async_v3(handle, imgs, img_count, &result_sender)) {
if (auto ec = mmdeploy_text_detector_apply_async_v3(detector, imgs, img_count, &result_sender)) {
return ec;
}
if (auto ec = mmdeploy_text_detector_continue_async(result_sender, cont, context, output)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
int mmdeploy_text_detector_apply_async_v3(mm_handle_t handle, const mm_mat_t* imgs, int img_count,
int mmdeploy_text_detector_apply_async_v3(mmdeploy_text_detector_t detector,
const mmdeploy_mat_t* imgs, int img_count,
mmdeploy_sender_t* output) {
wrapped<mmdeploy_value_t> input_val;
if (auto ec = mmdeploy_text_detector_create_input(imgs, img_count, input_val.ptr())) {
return ec;
}
mmdeploy_sender_t input_sndr = mmdeploy_executor_just(input_val);
if (auto ec = mmdeploy_text_detector_apply_async(handle, input_sndr, output)) {
if (auto ec = mmdeploy_text_detector_apply_async(detector, input_sndr, output)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
int mmdeploy_text_detector_continue_async(mmdeploy_sender_t input,
@@ -192,7 +201,7 @@ int mmdeploy_text_detector_continue_async(mmdeploy_sender_t input,
auto sender = Guard([&] {
return Take(
LetValue(Take(input), [fn = cont, context](Value& value) -> TypeErasedSender<Value> {
mm_text_detect_t* results{};
mmdeploy_text_detection_t* results{};
int* result_count{};
if (auto ec = mmdeploy_text_detector_get_result(Cast(&value), &results, &result_count)) {
return Just(Value());
@@ -207,7 +216,7 @@ int mmdeploy_text_detector_continue_async(mmdeploy_sender_t input,
});
if (sender) {
*output = sender;
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}


@@ -10,15 +10,18 @@
#include "common.h"
#include "executor.h"
#include "model.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mm_text_detect_t {
mm_pointf_t bbox[4]; ///< a text bounding box whose vertices are in clockwise order
typedef struct mmdeploy_text_detection_t {
mmdeploy_point_t bbox[4]; ///< a text bounding box whose vertices are in clockwise order
float score;
} mm_text_detect_t;
} mmdeploy_text_detection_t;
typedef struct mmdeploy_text_detector* mmdeploy_text_detector_t;
/**
* @brief Create text-detector's handle
@@ -26,29 +29,29 @@ typedef struct mm_text_detect_t {
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle instance of a text-detector, which must be destroyed
* @param[out] detector instance of a text-detector, which must be destroyed
* by \ref mmdeploy_text_detector_destroy
* @return status of creating text-detector's handle
*/
MMDEPLOY_API int mmdeploy_text_detector_create(mm_model_t model, const char* device_name,
int device_id, mm_handle_t* handle);
MMDEPLOY_API int mmdeploy_text_detector_create(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_text_detector_t* detector);
/**
* @brief Create text-detector's handle
* @param[in] model_path path to text detection model
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device
* @param[out] handle instance of a text-detector, which must be destroyed
* @param[out] detector instance of a text-detector, which must be destroyed
* by \ref mmdeploy_text_detector_destroy
* @return status of creating text-detector's handle
*/
MMDEPLOY_API int mmdeploy_text_detector_create_by_path(const char* model_path,
const char* device_name, int device_id,
mm_handle_t* handle);
mmdeploy_text_detector_t* detector);
/**
* @brief Apply text-detector to batch images and get their inference results
* @param[in] handle text-detector's handle created by \ref mmdeploy_text_detector_create_by_path
* @param[in] detector text-detector's handle created by \ref mmdeploy_text_detector_create_by_path
* @param[in] mats a batch of images
* @param[in] mat_count number of images in the batch
* @param[out] results a linear buffer to save text detection results of each
@@ -57,8 +60,9 @@ MMDEPLOY_API int mmdeploy_text_detector_create_by_path(const char* model_path,
* results of each image. It must be released by \ref mmdeploy_detector_release_result
* @return status of inference
*/
MMDEPLOY_API int mmdeploy_text_detector_apply(mm_handle_t handle, const mm_mat_t* mats,
int mat_count, mm_text_detect_t** results,
MMDEPLOY_API int mmdeploy_text_detector_apply(mmdeploy_text_detector_t detector,
const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_text_detection_t** results,
int** result_count);
/** @brief Release the inference result buffer returned by \ref mmdeploy_text_detector_apply
@@ -66,15 +70,15 @@ MMDEPLOY_API int mmdeploy_text_detector_apply(mm_handle_t handle, const mm_mat_t
* @param[in] result_count \p results size buffer
* @param[in] count the length of buffer \p result_count
*/
MMDEPLOY_API void mmdeploy_text_detector_release_result(mm_text_detect_t* results,
MMDEPLOY_API void mmdeploy_text_detector_release_result(mmdeploy_text_detection_t* results,
const int* result_count, int count);
/**
* @brief Destroy text-detector's handle
* @param[in] handle text-detector's handle created by \ref mmdeploy_text_detector_create_by_path or
* \ref mmdeploy_text_detector_create
* @param[in] detector text-detector's handle created by \ref mmdeploy_text_detector_create_by_path
* or \ref mmdeploy_text_detector_create
*/
MMDEPLOY_API void mmdeploy_text_detector_destroy(mm_handle_t handle);
MMDEPLOY_API void mmdeploy_text_detector_destroy(mmdeploy_text_detector_t detector);
/******************************************************************************
* Experimental asynchronous APIs */
@@ -83,9 +87,9 @@ MMDEPLOY_API void mmdeploy_text_detector_destroy(mm_handle_t handle);
* @brief Same as \ref mmdeploy_text_detector_create, but allows controlling the execution context
* of tasks via exec_info
*/
MMDEPLOY_API int mmdeploy_text_detector_create_v2(mm_model_t model, const char* device_name,
MMDEPLOY_API int mmdeploy_text_detector_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mm_handle_t* handle);
mmdeploy_text_detector_t* detector);
/**
* @brief Pack text-detector inputs into mmdeploy_value_t
@@ -93,23 +97,24 @@ MMDEPLOY_API int mmdeploy_text_detector_create_v2(mm_model_t model, const char*
* @param[in] mat_count number of images in the batch
* @return the created value
*/
MMDEPLOY_API int mmdeploy_text_detector_create_input(const mm_mat_t* mats, int mat_count,
MMDEPLOY_API int mmdeploy_text_detector_create_input(const mmdeploy_mat_t* mats, int mat_count,
mmdeploy_value_t* input);
/**
* @brief Same as \ref mmdeploy_text_detector_apply, but input and output are packed in \ref
* mmdeploy_value_t.
*/
MMDEPLOY_API int mmdeploy_text_detector_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
mmdeploy_value_t* output);
MMDEPLOY_API int mmdeploy_text_detector_apply_v2(mmdeploy_text_detector_t detector,
mmdeploy_value_t input, mmdeploy_value_t* output);
/**
* @brief Apply text-detector asynchronously
* @param[in] handle handle to the detector
* @param[in] detector handle to the detector
* @param[in] input input sender that will be consumed by the operation
* @return output sender
*/
MMDEPLOY_API int mmdeploy_text_detector_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
MMDEPLOY_API int mmdeploy_text_detector_apply_async(mmdeploy_text_detector_t detector,
mmdeploy_sender_t input,
mmdeploy_sender_t* output);
/**
@@ -123,11 +128,12 @@ MMDEPLOY_API int mmdeploy_text_detector_apply_async(mm_handle_t handle, mmdeploy
* @return status of the operation
*/
MMDEPLOY_API
int mmdeploy_text_detector_get_result(mmdeploy_value_t output, mm_text_detect_t** results,
int mmdeploy_text_detector_get_result(mmdeploy_value_t output, mmdeploy_text_detection_t** results,
int** result_count);
typedef int (*mmdeploy_text_detector_continue_t)(mm_text_detect_t* results, int* result_count,
void* context, mmdeploy_sender_t* output);
typedef int (*mmdeploy_text_detector_continue_t)(mmdeploy_text_detection_t* results,
int* result_count, void* context,
mmdeploy_sender_t* output);
// MMDEPLOY_API int mmdeploy_text_detector_apply_async_v2(mm_handle_t handle, const mm_mat_t* imgs,
// int img_count,
@@ -135,8 +141,9 @@ typedef int (*mmdeploy_text_detector_continue_t)(mm_text_detect_t* results, int*
// cont, void* context, mmdeploy_sender_t*
// output);
MMDEPLOY_API int mmdeploy_text_detector_apply_async_v3(mm_handle_t handle, const mm_mat_t* imgs,
int img_count, mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_text_detector_apply_async_v3(mmdeploy_text_detector_t detector,
const mmdeploy_mat_t* imgs, int img_count,
mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_text_detector_continue_async(mmdeploy_sender_t input,
mmdeploy_text_detector_continue_t cont,


@@ -4,10 +4,8 @@
#include <numeric>
#include "mmdeploy/apis/c/common_internal.h"
#include "mmdeploy/apis/c/executor_internal.h"
#include "mmdeploy/apis/c/model.h"
#include "mmdeploy/apis/c/pipeline.h"
#include "common_internal.h"
#include "executor_internal.h"
#include "mmdeploy/archive/value_archive.h"
#include "mmdeploy/codebase/mmocr/mmocr.h"
#include "mmdeploy/core/device.h"
@@ -16,6 +14,8 @@
#include "mmdeploy/core/status_code.h"
#include "mmdeploy/core/utils/formatter.h"
#include "mmdeploy/core/value.h"
#include "model.h"
#include "pipeline.h"
using namespace mmdeploy;
@@ -65,47 +65,52 @@ const Value& config_template() {
return v;
}
int mmdeploy_text_recognizer_create_impl(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
int mmdeploy_text_recognizer_create_impl(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_text_recognizer_t* recognizer) {
auto config = config_template();
config["pipeline"]["tasks"][2]["params"]["model"] = *static_cast<Model*>(model);
config["pipeline"]["tasks"][2]["params"]["model"] = *Cast(model);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info, handle);
return mmdeploy_pipeline_create(Cast(&config), device_name, device_id, exec_info,
(mmdeploy_pipeline_t*)recognizer);
}
} // namespace
int mmdeploy_text_recognizer_create(mm_model_t model, const char* device_name, int device_id,
mm_handle_t* handle) {
return mmdeploy_text_recognizer_create_impl(model, device_name, device_id, nullptr, handle);
int mmdeploy_text_recognizer_create(mmdeploy_model_t model, const char* device_name, int device_id,
mmdeploy_text_recognizer_t* recognizer) {
return mmdeploy_text_recognizer_create_impl(model, device_name, device_id, nullptr, recognizer);
}
int mmdeploy_text_recognizer_create_v2(mm_model_t model, const char* device_name, int device_id,
mmdeploy_exec_info_t exec_info, mm_handle_t* handle) {
return mmdeploy_text_recognizer_create_impl(model, device_name, device_id, exec_info, handle);
int mmdeploy_text_recognizer_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mmdeploy_text_recognizer_t* recognizer) {
return mmdeploy_text_recognizer_create_impl(model, device_name, device_id, exec_info, recognizer);
}
int mmdeploy_text_recognizer_create_by_path(const char* model_path, const char* device_name,
int device_id, mm_handle_t* handle) {
mm_model_t model{};
int device_id, mmdeploy_text_recognizer_t* recognizer) {
mmdeploy_model_t model{};
if (auto ec = mmdeploy_model_create_by_path(model_path, &model)) {
return ec;
}
auto ec = mmdeploy_text_recognizer_create_impl(model, device_name, device_id, nullptr, handle);
auto ec =
mmdeploy_text_recognizer_create_impl(model, device_name, device_id, nullptr, recognizer);
mmdeploy_model_destroy(model);
return ec;
}
int mmdeploy_text_recognizer_apply(mm_handle_t handle, const mm_mat_t* images, int count,
mm_text_recognize_t** results) {
return mmdeploy_text_recognizer_apply_bbox(handle, images, count, nullptr, nullptr, results);
int mmdeploy_text_recognizer_apply(mmdeploy_text_recognizer_t recognizer,
const mmdeploy_mat_t* images, int count,
mmdeploy_text_recognition_t** results) {
return mmdeploy_text_recognizer_apply_bbox(recognizer, images, count, nullptr, nullptr, results);
}
int mmdeploy_text_recognizer_create_input(const mm_mat_t* images, int image_count,
const mm_text_detect_t* bboxes, const int* bbox_count,
mmdeploy_value_t* output) {
int mmdeploy_text_recognizer_create_input(const mmdeploy_mat_t* images, int image_count,
const mmdeploy_text_detection_t* bboxes,
const int* bbox_count, mmdeploy_value_t* output) {
if (image_count && images == nullptr) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
try {
Value::Array input_images;
@@ -151,47 +156,49 @@ int mmdeploy_text_recognizer_create_input(const mm_mat_t* images, int image_coun
Value input{std::move(input_images), std::move(input_bboxes)};
*output = Take(std::move(input));
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
} catch (const std::exception& e) {
MMDEPLOY_ERROR("exception caught: {}", e.what());
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}
int mmdeploy_text_recognizer_apply_bbox(mm_handle_t handle, const mm_mat_t* mats, int mat_count,
const mm_text_detect_t* bboxes, const int* bbox_count,
mm_text_recognize_t** results) {
int mmdeploy_text_recognizer_apply_bbox(mmdeploy_text_recognizer_t recognizer,
const mmdeploy_mat_t* images, int image_count,
const mmdeploy_text_detection_t* bboxes,
const int* bbox_count,
mmdeploy_text_recognition_t** results) {
wrapped<mmdeploy_value_t> input;
if (auto ec =
mmdeploy_text_recognizer_create_input(mats, mat_count, bboxes, bbox_count, input.ptr())) {
if (auto ec = mmdeploy_text_recognizer_create_input(images, image_count, bboxes, bbox_count,
input.ptr())) {
return ec;
}
wrapped<mmdeploy_value_t> output;
if (auto ec = mmdeploy_text_recognizer_apply_v2(handle, input, output.ptr())) {
if (auto ec = mmdeploy_text_recognizer_apply_v2(recognizer, input, output.ptr())) {
return ec;
}
if (auto ec = mmdeploy_text_recognizer_get_result(output, results)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
int mmdeploy_text_recognizer_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
int mmdeploy_text_recognizer_apply_v2(mmdeploy_text_recognizer_t recognizer, mmdeploy_value_t input,
mmdeploy_value_t* output) {
return mmdeploy_pipeline_apply(handle, input, output);
return mmdeploy_pipeline_apply((mmdeploy_pipeline_t)recognizer, input, output);
}
int mmdeploy_text_recognizer_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async(handle, input, output);
int mmdeploy_text_recognizer_apply_async(mmdeploy_text_recognizer_t recognizer,
mmdeploy_sender_t input, mmdeploy_sender_t* output) {
return mmdeploy_pipeline_apply_async((mmdeploy_pipeline_t)recognizer, input, output);
}
MMDEPLOY_API int mmdeploy_text_recognizer_get_result(mmdeploy_value_t output,
mm_text_recognize_t** results) {
mmdeploy_text_recognition_t** results) {
if (!output || !results) {
return MM_E_INVALID_ARG;
return MMDEPLOY_E_INVALID_ARG;
}
try {
std::vector<std::vector<mmocr::TextRecognizerOutput>> recognizer_outputs;
@@ -203,12 +210,12 @@ MMDEPLOY_API int mmdeploy_text_recognizer_get_result(mmdeploy_value_t output,
result_count += img_outputs.size();
}
auto deleter = [&](mm_text_recognize_t* p) {
auto deleter = [&](mmdeploy_text_recognition_t* p) {
mmdeploy_text_recognizer_release_result(p, static_cast<int>(result_count));
};
std::unique_ptr<mm_text_recognize_t[], decltype(deleter)> _results(
new mm_text_recognize_t[result_count]{}, deleter);
std::unique_ptr<mmdeploy_text_recognition_t[], decltype(deleter)> _results(
new mmdeploy_text_recognition_t[result_count]{}, deleter);
size_t result_idx = 0;
for (const auto& img_result : recognizer_outputs) {
@@ -233,10 +240,10 @@ MMDEPLOY_API int mmdeploy_text_recognizer_get_result(mmdeploy_value_t output,
} catch (...) {
MMDEPLOY_ERROR("unknown exception caught");
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
void mmdeploy_text_recognizer_release_result(mm_text_recognize_t* results, int count) {
void mmdeploy_text_recognizer_release_result(mmdeploy_text_recognition_t* results, int count) {
for (int i = 0; i < count; ++i) {
delete[] results[i].score;
delete[] results[i].text;
@@ -244,21 +251,24 @@ void mmdeploy_text_recognizer_release_result(mm_text_recognize_t* results, int c
delete[] results;
}
void mmdeploy_text_recognizer_destroy(mm_handle_t handle) { mmdeploy_pipeline_destroy(handle); }
void mmdeploy_text_recognizer_destroy(mmdeploy_text_recognizer_t recognizer) {
mmdeploy_pipeline_destroy((mmdeploy_pipeline_t)recognizer);
}
int mmdeploy_text_recognizer_apply_async_v3(mm_handle_t handle, const mm_mat_t* imgs, int img_count,
const mm_text_detect_t* bboxes, const int* bbox_count,
mmdeploy_sender_t* output) {
int mmdeploy_text_recognizer_apply_async_v3(mmdeploy_text_recognizer_t recognizer,
const mmdeploy_mat_t* imgs, int img_count,
const mmdeploy_text_detection_t* bboxes,
const int* bbox_count, mmdeploy_sender_t* output) {
wrapped<mmdeploy_value_t> input_val;
if (auto ec = mmdeploy_text_recognizer_create_input(imgs, img_count, bboxes, bbox_count,
input_val.ptr())) {
return ec;
}
mmdeploy_sender_t input_sndr = mmdeploy_executor_just(input_val);
if (auto ec = mmdeploy_text_recognizer_apply_async(handle, input_sndr, output)) {
if (auto ec = mmdeploy_text_recognizer_apply_async(recognizer, input_sndr, output)) {
return ec;
}
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
int mmdeploy_text_recognizer_continue_async(mmdeploy_sender_t input,
@@ -267,7 +277,7 @@ int mmdeploy_text_recognizer_continue_async(mmdeploy_sender_t input,
auto sender = Guard([&] {
return Take(
LetValue(Take(input), [fn = cont, context](Value& value) -> TypeErasedSender<Value> {
mm_text_recognize_t* results{};
mmdeploy_text_recognition_t* results{};
if (auto ec = mmdeploy_text_recognizer_get_result(Cast(&value), &results)) {
return Just(Value());
}
@@ -281,7 +291,7 @@ });
});
if (sender) {
*output = sender;
return MM_SUCCESS;
return MMDEPLOY_SUCCESS;
}
return MM_E_FAIL;
return MMDEPLOY_E_FAIL;
}


@@ -16,11 +16,13 @@
extern "C" {
#endif
typedef struct mm_text_recognize_t {
typedef struct mmdeploy_text_recognition_t {
char* text;
float* score;
int length;
} mm_text_recognize_t;
} mmdeploy_text_recognition_t;
typedef struct mmdeploy_text_recognizer* mmdeploy_text_recognizer_t;
/**
* @brief Create a text recognizer instance
@@ -28,29 +30,30 @@ typedef struct mm_text_recognize_t {
* \ref mmdeploy_model_create_by_path or \ref mmdeploy_model_create in \ref model.h
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle handle of the created text recognizer, which must be destroyed
* @param[out] recognizer handle of the created text recognizer, which must be destroyed
* by \ref mmdeploy_text_recognizer_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_text_recognizer_create(mm_model_t model, const char* device_name,
int device_id, mm_handle_t* handle);
MMDEPLOY_API int mmdeploy_text_recognizer_create(mmdeploy_model_t model, const char* device_name,
int device_id,
mmdeploy_text_recognizer_t* recognizer);
/**
* @brief Create a text recognizer instance
* @param[in] model_path path to text recognition model
* @param[in] device_name name of device, such as "cpu", "cuda", etc.
* @param[in] device_id id of device.
* @param[out] handle handle of the created text recognizer, which must be destroyed
* @param[out] recognizer handle of the created text recognizer, which must be destroyed
* by \ref mmdeploy_text_recognizer_destroy
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_text_recognizer_create_by_path(const char* model_path,
const char* device_name, int device_id,
mm_handle_t* handle);
mmdeploy_text_recognizer_t* recognizer);
/**
* @brief Apply text recognizer to a batch of text images
* @param[in] handle text recognizer's handle created by \ref
* @param[in] recognizer text recognizer's handle created by \ref
* mmdeploy_text_recognizer_create_by_path
* @param[in] images a batch of text images
* @param[in] count number of images in the batch
@@ -58,12 +61,13 @@ MMDEPLOY_API int mmdeploy_text_recognizer_create_by_path(const char* model_path,
* by \ref mmdeploy_text_recognizer_release_result
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_text_recognizer_apply(mm_handle_t handle, const mm_mat_t* images,
int count, mm_text_recognize_t** results);
MMDEPLOY_API int mmdeploy_text_recognizer_apply(mmdeploy_text_recognizer_t recognizer,
const mmdeploy_mat_t* images, int count,
mmdeploy_text_recognition_t** results);
/**
* @brief Apply text recognizer to a batch of images supplied with text bboxes
* @param[in] handle text recognizer's handle created by \ref
* @param[in] recognizer text recognizer's handle created by \ref
* mmdeploy_text_recognizer_create_by_path
* @param[in] images a batch of text images
* @param[in] image_count number of images in the batch
@@ -73,25 +77,26 @@ MMDEPLOY_API int mmdeploy_text_recognizer_apply(mm_handle_t handle, const mm_mat
* bboxes, must be released by \ref mmdeploy_text_recognizer_release_result
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_text_recognizer_apply_bbox(mm_handle_t handle, const mm_mat_t* images,
int image_count,
const mm_text_detect_t* bboxes,
MMDEPLOY_API int mmdeploy_text_recognizer_apply_bbox(mmdeploy_text_recognizer_t recognizer,
const mmdeploy_mat_t* images, int image_count,
const mmdeploy_text_detection_t* bboxes,
const int* bbox_count,
mm_text_recognize_t** results);
mmdeploy_text_recognition_t** results);
/** @brief Release result buffer returned by \ref mmdeploy_text_recognizer_apply or \ref
* mmdeploy_text_recognizer_apply_bbox
* @param[in] results result buffer by text recognizer
* @param[in] count length of \p results
*/
MMDEPLOY_API void mmdeploy_text_recognizer_release_result(mm_text_recognize_t* results, int count);
MMDEPLOY_API void mmdeploy_text_recognizer_release_result(mmdeploy_text_recognition_t* results,
int count);
/**
* @brief destroy text recognizer
* @param[in] handle handle of text recognizer created by \ref
* @param[in] recognizer handle of text recognizer created by \ref
* mmdeploy_text_recognizer_create_by_path or \ref mmdeploy_text_recognizer_create
*/
MMDEPLOY_API void mmdeploy_text_recognizer_destroy(mm_handle_t handle);
MMDEPLOY_API void mmdeploy_text_recognizer_destroy(mmdeploy_text_recognizer_t recognizer);
/******************************************************************************
* Experimental asynchronous APIs */
@@ -100,9 +105,9 @@ MMDEPLOY_API void mmdeploy_text_recognizer_destroy(mm_handle_t handle);
* @brief Same as \ref mmdeploy_text_recognizer_create, but allows controlling the execution
* context of tasks via exec_info
*/
MMDEPLOY_API int mmdeploy_text_recognizer_create_v2(mm_model_t model, const char* device_name,
MMDEPLOY_API int mmdeploy_text_recognizer_create_v2(mmdeploy_model_t model, const char* device_name,
int device_id, mmdeploy_exec_info_t exec_info,
mm_handle_t* handle);
mmdeploy_text_recognizer_t* recognizer);
/**
* @brief Pack text-recognizer inputs into mmdeploy_value_t
@@ -112,27 +117,30 @@ MMDEPLOY_API int mmdeploy_text_recognizer_create_v2(mm_model_t model, const char
* @param[in] bbox_count number of bboxes for each image in \p images; must have the same length as \p images
* @return status code of the operation
*/
MMDEPLOY_API int mmdeploy_text_recognizer_create_input(const mm_mat_t* images, int image_count,
const mm_text_detect_t* bboxes,
MMDEPLOY_API int mmdeploy_text_recognizer_create_input(const mmdeploy_mat_t* images,
int image_count,
const mmdeploy_text_detection_t* bboxes,
const int* bbox_count,
mmdeploy_value_t* output);
MMDEPLOY_API int mmdeploy_text_recognizer_apply_v2(mm_handle_t handle, mmdeploy_value_t input,
MMDEPLOY_API int mmdeploy_text_recognizer_apply_v2(mmdeploy_text_recognizer_t recognizer,
mmdeploy_value_t input,
mmdeploy_value_t* output);
/**
* @brief Same as \ref mmdeploy_text_recognizer_apply_bbox, but input and output are packed in \ref
* mmdeploy_value_t.
*/
MMDEPLOY_API int mmdeploy_text_recognizer_apply_async(mm_handle_t handle, mmdeploy_sender_t input,
MMDEPLOY_API int mmdeploy_text_recognizer_apply_async(mmdeploy_text_recognizer_t recognizer,
mmdeploy_sender_t input,
mmdeploy_sender_t* output);
typedef int (*mmdeploy_text_recognizer_continue_t)(mm_text_recognize_t* results, void* context,
mmdeploy_sender_t* output);
typedef int (*mmdeploy_text_recognizer_continue_t)(mmdeploy_text_recognition_t* results,
void* context, mmdeploy_sender_t* output);
MMDEPLOY_API int mmdeploy_text_recognizer_apply_async_v3(mm_handle_t handle, const mm_mat_t* imgs,
int img_count,
const mm_text_detect_t* bboxes,
MMDEPLOY_API int mmdeploy_text_recognizer_apply_async_v3(mmdeploy_text_recognizer_t recognizer,
const mmdeploy_mat_t* imgs, int img_count,
const mmdeploy_text_detection_t* bboxes,
const int* bbox_count,
mmdeploy_sender_t* output);
@@ -147,7 +155,7 @@ MMDEPLOY_API int mmdeploy_text_recognizer_continue_async(mmdeploy_sender_t input
* @return status of the operation
*/
MMDEPLOY_API int mmdeploy_text_recognizer_get_result(mmdeploy_value_t output,
mm_text_recognize_t** results);
mmdeploy_text_recognition_t** results);
#ifdef __cplusplus
}
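The renaming above replaces the untyped `mm_handle_t` with per-task opaque handle types that are cast back to `mmdeploy_pipeline_t` internally (as in `mmdeploy_text_recognizer_apply_v2`). A self-contained C++ sketch of that pattern follows; the `demo_*` names are illustrative stand-ins, not the real mmdeploy types:

```cpp
#include <string>

struct demo_pipeline { std::string config; };            // hidden implementation
typedef demo_pipeline* demo_pipeline_t;                  // generic pipeline handle
typedef struct demo_text_recognizer* demo_recognizer_t;  // task-specific opaque handle

// Creation hands out the task-specific type, giving callers compile-time
// type safety instead of one interchangeable mm_handle_t.
inline demo_recognizer_t demo_recognizer_create(const char* cfg) {
  return reinterpret_cast<demo_recognizer_t>(new demo_pipeline{cfg});
}
// Shared machinery casts back to the underlying pipeline handle.
inline std::string demo_apply(demo_recognizer_t r) {
  return reinterpret_cast<demo_pipeline_t>(r)->config;
}
inline void demo_recognizer_destroy(demo_recognizer_t r) {
  delete reinterpret_cast<demo_pipeline_t>(r);
}
```

Because `demo_text_recognizer` is never defined, passing a wrong handle type is a compile error, while the cast to the pipeline type stays a one-line implementation detail.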


@@ -106,7 +106,7 @@ namespace MMDeploy
private unsafe void FormatResult(int matCount, int* resultCount, Label* results, ref List<ClassifierOutput> output, out int total)
{
total = 0;
total = matCount;
for (int i = 0; i < matCount; i++)
{
ClassifierOutput outi = default;
@@ -114,7 +114,6 @@ namespace MMDeploy
{
outi.Add(results->Id, results->Score);
results++;
total++;
}
output.Add(outi);


@@ -205,7 +205,7 @@ namespace MMDeploy
private unsafe void FormatResult(int matCount, int* resultCount, CDetect* results, ref List<DetectorOutput> output, out int total)
{
total = 0;
total = matCount;
for (int i = 0; i < matCount; i++)
{
DetectorOutput outi = default;
@@ -213,7 +213,6 @@ namespace MMDeploy
{
outi.Add(results);
results++;
total++;
}
output.Add(outi);


@@ -184,7 +184,7 @@ namespace MMDeploy
private unsafe void FormatResult(int matCount, int* resultCount, TextDetect* results, ref List<TextDetectorOutput> output, out int total)
{
total = 0;
total = matCount;
for (int i = 0; i < matCount; i++)
{
TextDetectorOutput outi = default;
@@ -192,7 +192,6 @@ namespace MMDeploy
{
outi.Add(results);
results++;
total++;
}
output.Add(outi);


@@ -14,10 +14,10 @@
</PropertyGroup>
<PropertyGroup>
<MMDeployExternalNativeDlls>$(MSBuildThisFileDirectory)\..\..\..\..</MMDeployExternalNativeDlls>
<MMDeployNativeDlls>$(MSBuildThisFileDirectory)\..\..\..\..\..</MMDeployNativeDlls>
</PropertyGroup>
<ItemGroup>
<Content CopyToOutputDirectory="PreserveNewest" Include="$(MMDeployExternalNativeDlls)\build\bin\Release\MMDeployExtern.dll" Pack="true" PackagePath="runtimes\win-x64\native\MMDeployExtern.dll" />
<Content CopyToOutputDirectory="PreserveNewest" Include="$(MMDeployNativeDlls)\build\bin\Release\mmdeploy.dll" Pack="true" PackagePath="runtimes\win-x64\native\mmdeploy.dll" />
</ItemGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">


@@ -5,6 +5,6 @@
/// </summary>
internal static partial class NativeMethods
{
public const string DllExtern = "MMDeployExtern";
public const string DllExtern = "mmdeploy";
}
}


@@ -25,7 +25,7 @@ To use the nuget package, you also need to download the backend dependencies. Fo
Before building the C# API, you need to build the SDK first. Please follow this [tutorial](../../../docs/en/build/windows.md) ([Chinese version](../../../docs/zh_cn/build/windows.md)) to build the SDK. Remember to set the `MMDEPLOY_BUILD_SDK_CSHARP_API` option to ON. We recommend setting `MMDEPLOY_SHARED_LIBS` to OFF and using static third-party libraries (pplcv, opencv, etc.). In that case, you only need to add the backend dependencies to your system path; otherwise, you need to add all dependencies.
If you follow the tutorial, the MMDeployExtern.dll will be built in `build\bin\release`. Make sure the expected dll is in that path or the next step will throw a file-not-exist error.
If you follow the tutorial, mmdeploy.dll will be built in `build\bin\release`. Make sure the expected DLL is in that path, or the next step will throw a file-not-found error.
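As a rough sketch, the build described above might look like the following commands; the directory layout, generator, and any backend flags are placeholders, so follow the linked tutorial for the authoritative options:

```shell
# Hypothetical build sketch: enable the C# bindings and prefer static libs.
# Paths and extra backend options are placeholders for your environment.
cd mmdeploy
mkdir build
cd build
cmake .. -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_SDK_CSHARP_API=ON -DMMDEPLOY_SHARED_LIBS=OFF
cmake --build . --config Release
# if this succeeds, mmdeploy.dll should end up under build\bin\Release
```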
**Step 1.** Build MMDeploy nuget package.


@@ -0,0 +1,28 @@
# Copyright (c) OpenMMLab. All rights reserved.
cmake_minimum_required(VERSION 3.14)
project(mmdeploy_cxx_api)
if (MMDEPLOY_BUILD_SDK_CXX_API)
add_library(${PROJECT_NAME} INTERFACE)
target_include_directories(${PROJECT_NAME} INTERFACE
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>
$<INSTALL_INTERFACE:include>)
target_compile_features(${PROJECT_NAME} INTERFACE cxx_std_17)
target_link_libraries(${PROJECT_NAME} INTERFACE mmdeploy::core)
foreach (task ${MMDEPLOY_TASKS})
target_link_libraries(mmdeploy_${task} INTERFACE ${PROJECT_NAME})
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/mmdeploy/${task}.hpp
DESTINATION include/mmdeploy)
endforeach ()
if (TARGET mmdeploy)
target_link_libraries(mmdeploy INTERFACE ${PROJECT_NAME})
endif ()
mmdeploy_export(${PROJECT_NAME})
install(FILES ${CMAKE_CURRENT_SOURCE_DIR}/mmdeploy/common.hpp
DESTINATION include/mmdeploy)
install(DIRECTORY ${CMAKE_SOURCE_DIR}/demo/csrc/ DESTINATION example/cpp
FILES_MATCHING
PATTERN "*.cxx"
)
endif ()
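Given the install and export logic above, a downstream CMake project could consume the headers roughly as follows; the package and target names here are assumptions inferred from the `mmdeploy_export(${PROJECT_NAME})` call, not verified against the shipped package config:

```cmake
# Hypothetical consumer project; package/target names are assumptions.
find_package(MMDeploy REQUIRED)
add_executable(demo_app main.cpp)
# Linking the INTERFACE target propagates include dirs and the C++17 requirement.
target_link_libraries(demo_app PRIVATE mmdeploy)
```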


@@ -0,0 +1,67 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_CLASSIFIER_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_CLASSIFIER_HPP_
#include "mmdeploy/classifier.h"
#include "mmdeploy/common.hpp"
namespace mmdeploy {
using Classification = mmdeploy_classification_t;
class Classifier : public NonMovable {
public:
Classifier(const Model& model, const Device& device) {
auto ec = mmdeploy_classifier_create(model, device.name(), device.index(), &classifier_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~Classifier() {
if (classifier_) {
mmdeploy_classifier_destroy(classifier_);
classifier_ = {};
}
}
using Result = Result_<Classification>;
std::vector<Result> Apply(Span<const Mat> images) {
if (images.empty()) {
return {};
}
Classification* results{};
int* result_count{};
auto ec = mmdeploy_classifier_apply(classifier_, reinterpret(images.data()),
static_cast<int>(images.size()), &results, &result_count);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::vector<Result> rets;
rets.reserve(images.size());
std::shared_ptr<Classification> data(results, [result_count, count = images.size()](auto p) {
mmdeploy_classifier_release_result(p, result_count, count);
});
size_t offset = 0;
for (size_t i = 0; i < images.size(); ++i) {
offset += rets.emplace_back(offset, result_count[i], data).size();
}
return rets;
}
Result Apply(const Mat& img) { return Apply(Span{img})[0]; }
private:
mmdeploy_classifier_t classifier_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_CLASSIFIER_HPP_


@@ -0,0 +1,145 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_COMMON_H_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_COMMON_H_
#include <memory>
#include <type_traits>
#include <utility>
#include "mmdeploy/common.h"
#include "mmdeploy/core/mpl/span.h"
#include "mmdeploy/core/status_code.h"
#include "mmdeploy/core/types.h"
#include "mmdeploy/model.h"
#ifndef MMDEPLOY_CXX_USE_OPENCV
#define MMDEPLOY_CXX_USE_OPENCV 1
#endif
#if MMDEPLOY_CXX_USE_OPENCV
#include "opencv2/core/core.hpp"
#endif
namespace mmdeploy {
using Rect = mmdeploy_rect_t;
namespace { // avoid conflict with internal classes, for now
class Model {
public:
explicit Model(const char* path) {
mmdeploy_model_t model{};
auto ec = mmdeploy_model_create_by_path(path, &model);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
model_.reset(model, [](auto p) { mmdeploy_model_destroy(p); });
}
Model(const void* buffer, size_t size) {
mmdeploy_model_t model{};
auto ec = mmdeploy_model_create(buffer, static_cast<int>(size), &model);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
model_.reset(model, [](auto p) { mmdeploy_model_destroy(p); });
}
operator mmdeploy_model_t() const noexcept { return model_.get(); }
private:
std::shared_ptr<mmdeploy_model> model_{};
};
class Device {
public:
explicit Device(std::string name, int index = 0) : name_(std::move(name)), index_(index) {}
const char* name() const noexcept { return name_.c_str(); }
int index() const noexcept { return index_; }
private:
std::string name_;
int index_;
};
class Mat {
public:
Mat() : desc_{} {}
Mat(int height, int width, int channels, mmdeploy_pixel_format_t format,
mmdeploy_data_type_t type, uint8_t* data)
: desc_{data, height, width, channels, format, type} {}
const mmdeploy_mat_t& desc() const noexcept { return desc_; }
#if MMDEPLOY_CXX_USE_OPENCV
Mat(const cv::Mat& mat, mmdeploy_pixel_format_t pixel_format)
: desc_{mat.data, mat.rows, mat.cols, mat.channels(), pixel_format, GetCvType(mat.depth())} {
if (pixel_format == MMDEPLOY_PIXEL_FORMAT_COUNT) {
throw_exception(eNotSupported);
}
if (desc_.type == MMDEPLOY_DATA_TYPE_COUNT) {
throw_exception(eNotSupported);
}
}
Mat(const cv::Mat& mat) : Mat(mat, GetCvFormat(mat.channels())) {}
static mmdeploy_data_type_t GetCvType(int depth) {
switch (depth) {
case CV_8U:
return MMDEPLOY_DATA_TYPE_UINT8;
case CV_32F:
return MMDEPLOY_DATA_TYPE_FLOAT;
default:
return MMDEPLOY_DATA_TYPE_COUNT;
}
}
static mmdeploy_pixel_format_t GetCvFormat(int channels) {
switch (channels) {
case 1:
return MMDEPLOY_PIXEL_FORMAT_GRAYSCALE;
case 3:
return MMDEPLOY_PIXEL_FORMAT_BGR;
case 4:
return MMDEPLOY_PIXEL_FORMAT_BGRA;
default:
return MMDEPLOY_PIXEL_FORMAT_COUNT;
}
}
#endif
private:
mmdeploy_mat_t desc_;
};
template <typename T>
class Result_ {
public:
Result_(size_t offset, size_t size, std::shared_ptr<T> data)
: offset_(offset), size_(size), data_(std::move(data)) {}
T& operator[](size_t index) const noexcept { return *(data_.get() + offset_ + index); }
size_t size() const noexcept { return size_; }
T* begin() const noexcept { return data_.get() + offset_; }
T* end() const noexcept { return begin() + size_; }
T* operator->() const noexcept { return data_.get(); }
T& operator*() const noexcept { return *data_; }
private:
size_t offset_;
size_t size_;
std::shared_ptr<T> data_;
};
inline const mmdeploy_mat_t* reinterpret(const Mat* p) {
return reinterpret_cast<const mmdeploy_mat_t*>(p);
}
} // namespace
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_COMMON_H_
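The `Result_<T>` class above is the key memory-management idea of the new C++ wrappers: each per-image result is an (offset, size) view into one flat C buffer, and a `shared_ptr` with a custom deleter keeps that buffer alive until the last view is destroyed. A standalone illustration follows; `ResultView` and `demo_split` are simplified stand-ins, not the mmdeploy code itself:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

template <typename T>
class ResultView {
 public:
  ResultView(size_t offset, size_t size, std::shared_ptr<T> data)
      : offset_(offset), size_(size), data_(std::move(data)) {}
  T& operator[](size_t i) const { return *(data_.get() + offset_ + i); }
  size_t size() const { return size_; }
 private:
  size_t offset_;
  size_t size_;
  std::shared_ptr<T> data_;  // shared ownership of the whole flat buffer
};

// Split a flat buffer of 5 values into views for 2 "images" (2 and 3 results),
// mirroring how the wrappers slice the C API's result_count array.
inline std::vector<ResultView<int>> demo_split() {
  std::shared_ptr<int> data(new int[5]{10, 11, 20, 21, 22},
                            [](int* p) { delete[] p; });  // custom deleter
  const int result_count[] = {2, 3};
  std::vector<ResultView<int>> views;
  size_t offset = 0;
  for (int count : result_count) {
    views.emplace_back(offset, count, data);
    offset += count;
  }
  return views;
}
```

In the real wrappers the deleter calls the matching `*_release_result` function instead of `delete[]`, so the C API's ownership rules are honored automatically.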


@@ -0,0 +1,67 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_DETECTOR_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_DETECTOR_HPP_
#include "mmdeploy/common.hpp"
#include "mmdeploy/detector.h"
namespace mmdeploy {
using Detection = mmdeploy_detection_t;
class Detector : public NonMovable {
public:
Detector(const Model& model, const Device& device) {
auto ec = mmdeploy_detector_create(model, device.name(), device.index(), &detector_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~Detector() {
if (detector_) {
mmdeploy_detector_destroy(detector_);
detector_ = {};
}
}
using Result = Result_<Detection>;
std::vector<Result> Apply(Span<const Mat> images) {
if (images.empty()) {
return {};
}
Detection* results{};
int* result_count{};
auto ec = mmdeploy_detector_apply(detector_, reinterpret(images.data()),
static_cast<int>(images.size()), &results, &result_count);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::shared_ptr<Detection> data(results, [result_count, count = images.size()](auto p) {
mmdeploy_detector_release_result(p, result_count, count);
});
std::vector<Result> rets;
rets.reserve(images.size());
size_t offset = 0;
for (size_t i = 0; i < images.size(); ++i) {
offset += rets.emplace_back(offset, result_count[i], data).size();
}
return rets;
}
Result Apply(const Mat& image) { return Apply(Span{image})[0]; }
private:
mmdeploy_detector_t detector_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_DETECTOR_HPP_


@@ -0,0 +1,78 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_MMDEPLOY_POSE_DETECTOR_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_MMDEPLOY_POSE_DETECTOR_HPP_
#include "mmdeploy/common.hpp"
#include "mmdeploy/pose_detector.h"
namespace mmdeploy {
using PoseDetection = mmdeploy_pose_detection_t;
class PoseDetector : public NonMovable {
public:
PoseDetector(const Model& model, const Device& device) {
auto ec = mmdeploy_pose_detector_create(model, device.name(), device.index(), &detector_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~PoseDetector() {
if (detector_) {
mmdeploy_pose_detector_destroy(detector_);
detector_ = {};
}
}
using Result = Result_<PoseDetection>;
std::vector<Result> Apply(Span<const Mat> images, Span<const Rect> bboxes,
Span<const int> bbox_count) {
if (images.empty()) {
return {};
}
const mmdeploy_rect_t* p_bboxes{};
const int* p_bbox_count{};
if (!bboxes.empty()) {
p_bboxes = bboxes.data();
p_bbox_count = bbox_count.data();
}
PoseDetection* results{};
auto ec = mmdeploy_pose_detector_apply_bbox(detector_, reinterpret(images.data()),
static_cast<int>(images.size()), p_bboxes,
p_bbox_count, &results);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::shared_ptr<PoseDetection> data(results, [count = images.size()](auto p) {
mmdeploy_pose_detector_release_result(p, count);
});
std::vector<Result> rets;
rets.reserve(images.size());
size_t offset = 0;
for (size_t i = 0; i < images.size(); ++i) {
offset += rets.emplace_back(offset, bboxes.empty() ? 1 : bbox_count[i], data).size();
}
return rets;
}
Result Apply(const Mat& image, Span<const Rect> bboxes = {}) {
return Apply(Span{image}, bboxes, {static_cast<int>(bboxes.size())})[0];
}
private:
mmdeploy_pose_detector_t detector_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_MMDEPLOY_POSE_DETECTOR_HPP_


@@ -0,0 +1,62 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_RESTORER_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_RESTORER_HPP_
#include "mmdeploy/common.hpp"
#include "mmdeploy/restorer.h"
namespace mmdeploy {
class Restorer : public NonMovable {
public:
Restorer(const Model& model, const Device& device) {
auto ec = mmdeploy_restorer_create(model, device.name(), device.index(), &restorer_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~Restorer() {
if (restorer_) {
mmdeploy_restorer_destroy(restorer_);
restorer_ = {};
}
}
using Result = Result_<mmdeploy_mat_t>;
std::vector<Result> Apply(Span<const Mat> images) {
if (images.empty()) {
return {};
}
mmdeploy_mat_t* results{};
auto ec = mmdeploy_restorer_apply(restorer_, reinterpret(images.data()),
static_cast<int>(images.size()), &results);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::vector<Result> rets;
rets.reserve(images.size());
std::shared_ptr<mmdeploy_mat_t> data(
results, [count = images.size()](auto p) { mmdeploy_restorer_release_result(p, count); });
for (size_t i = 0; i < images.size(); ++i) {
rets.emplace_back(i, 1, data);
}
return rets;
}
Result Apply(const Mat& image) { return Apply(Span{image})[0]; }
private:
mmdeploy_restorer_t restorer_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_RESTORER_HPP_


@@ -0,0 +1,68 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_MMDEPLOY_ROTATED_DETECTOR_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_MMDEPLOY_ROTATED_DETECTOR_HPP_
#include "mmdeploy/common.hpp"
#include "mmdeploy/rotated_detector.h"
namespace mmdeploy {
using RotatedDetection = mmdeploy_rotated_detection_t;
class RotatedDetector : public NonMovable {
public:
RotatedDetector(const Model& model, const Device& device) {
auto ec = mmdeploy_rotated_detector_create(model, device.name(), device.index(), &detector_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~RotatedDetector() {
if (detector_) {
mmdeploy_rotated_detector_destroy(detector_);
detector_ = {};
}
}
using Result = Result_<RotatedDetection>;
std::vector<Result> Apply(Span<const Mat> images) {
if (images.empty()) {
return {};
}
RotatedDetection* results{};
int* result_count{};
auto ec =
mmdeploy_rotated_detector_apply(detector_, reinterpret(images.data()),
static_cast<int>(images.size()), &results, &result_count);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::shared_ptr<RotatedDetection> data(results, [result_count](auto p) {
mmdeploy_rotated_detector_release_result(p, result_count);
});
std::vector<Result> rets;
rets.reserve(images.size());
size_t offset = 0;
for (size_t i = 0; i < images.size(); ++i) {
offset += rets.emplace_back(offset, result_count[i], data).size();
}
return rets;
}
Result Apply(const Mat& image) { return Apply(Span{image})[0]; }
private:
mmdeploy_rotated_detector_t detector_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_MMDEPLOY_ROTATED_DETECTOR_HPP_


@@ -0,0 +1,64 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_SEGMENTOR_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_SEGMENTOR_HPP_
#include "mmdeploy/common.hpp"
#include "mmdeploy/segmentor.h"
namespace mmdeploy {
using Segmentation = mmdeploy_segmentation_t;
class Segmentor : public NonMovable {
public:
Segmentor(const Model& model, const Device& device) {
auto ec = mmdeploy_segmentor_create(model, device.name(), device.index(), &segmentor_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~Segmentor() {
if (segmentor_) {
mmdeploy_segmentor_destroy(segmentor_);
segmentor_ = {};
}
}
using Result = Result_<Segmentation>;
std::vector<Result> Apply(Span<const Mat> images) {
if (images.empty()) {
return {};
}
Segmentation* results{};
auto ec = mmdeploy_segmentor_apply(segmentor_, reinterpret(images.data()),
static_cast<int>(images.size()), &results);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::vector<Result> rets;
rets.reserve(images.size());
std::shared_ptr<Segmentation> data(
results, [count = images.size()](auto p) { mmdeploy_segmentor_release_result(p, count); });
for (size_t i = 0; i < images.size(); ++i) {
rets.emplace_back(i, 1, data);
}
return rets;
}
Result Apply(const Mat& image) { return Apply(Span{image})[0]; }
private:
mmdeploy_segmentor_t segmentor_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_SEGMENTOR_HPP_


@@ -0,0 +1,68 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_TEXT_DETECTOR_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_TEXT_DETECTOR_HPP_
#include "mmdeploy/common.hpp"
#include "mmdeploy/text_detector.h"
namespace mmdeploy {
using TextDetection = mmdeploy_text_detection_t;
class TextDetector : public NonMovable {
public:
TextDetector(const Model& model, const Device& device) {
auto ec = mmdeploy_text_detector_create(model, device.name(), device.index(), &detector_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~TextDetector() {
if (detector_) {
mmdeploy_text_detector_destroy(detector_);
detector_ = {};
}
}
using Result = Result_<TextDetection>;
std::vector<Result> Apply(Span<const Mat> images) {
if (images.empty()) {
return {};
}
TextDetection* results{};
int* result_count{};
auto ec =
mmdeploy_text_detector_apply(detector_, reinterpret(images.data()),
static_cast<int>(images.size()), &results, &result_count);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::shared_ptr<TextDetection> data(results, [result_count, count = images.size()](auto p) {
mmdeploy_text_detector_release_result(p, result_count, count);
});
std::vector<Result> rets;
rets.reserve(images.size());
size_t offset = 0;
for (size_t i = 0; i < images.size(); ++i) {
offset += rets.emplace_back(offset, result_count[i], data).size();
}
return rets;
}
Result Apply(const Mat& image) { return Apply(Span{image})[0]; }
private:
mmdeploy_text_detector_t detector_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_TEXT_DETECTOR_HPP_

@@ -0,0 +1,79 @@
// Copyright (c) OpenMMLab. All rights reserved.
#ifndef MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_TEXT_RECOGNIZER_HPP_
#define MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_TEXT_RECOGNIZER_HPP_
#include "mmdeploy/common.hpp"
#include "mmdeploy/text_detector.hpp"
#include "mmdeploy/text_recognizer.h"
namespace mmdeploy {
using TextRecognition = mmdeploy_text_recognition_t;
class TextRecognizer : public NonMovable {
public:
TextRecognizer(const Model& model, const Device& device) {
auto ec = mmdeploy_text_recognizer_create(model, device.name(), device.index(), &recognizer_);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
}
~TextRecognizer() {
if (recognizer_) {
mmdeploy_text_recognizer_destroy(recognizer_);
recognizer_ = {};
}
}
using Result = Result_<TextRecognition>;
std::vector<Result> Apply(Span<const Mat> images, Span<const TextDetection> bboxes,
Span<const int> bbox_count) {
if (images.empty()) {
return {};
}
const TextDetection* p_bboxes{};
const int* p_bbox_count{};
if (!bboxes.empty()) {
p_bboxes = bboxes.data();
p_bbox_count = bbox_count.data();
}
TextRecognition* results{};
auto ec = mmdeploy_text_recognizer_apply_bbox(recognizer_, reinterpret(images.data()),
static_cast<int>(images.size()), p_bboxes,
p_bbox_count, &results);
if (ec != MMDEPLOY_SUCCESS) {
throw_exception(static_cast<ErrorCode>(ec));
}
std::shared_ptr<TextRecognition> data(results, [count = images.size()](auto p) {
mmdeploy_text_recognizer_release_result(p, count);
});
std::vector<Result> rets;
rets.reserve(images.size());
size_t offset = 0;
for (size_t i = 0; i < images.size(); ++i) {
offset += rets.emplace_back(offset, bboxes.empty() ? 1 : bbox_count[i], data).size();
}
return rets;
}
Result Apply(const Mat& image, Span<const TextDetection> bboxes = {}) {
return Apply(Span{image}, bboxes, {static_cast<int>(bboxes.size())})[0];
}
private:
mmdeploy_text_recognizer_t recognizer_{};
};
} // namespace mmdeploy
#endif // MMDEPLOY_CSRC_MMDEPLOY_APIS_CXX_TEXT_RECOGNIZER_HPP_

@@ -0,0 +1,23 @@
project(mmdeploy_java_package)
find_package(Java REQUIRED)
include(UseJava)
add_subdirectory(native)
add_jar(${PROJECT_NAME} SOURCES
mmdeploy/DataType.java
mmdeploy/Mat.java
mmdeploy/InstanceMask.java
mmdeploy/PixelFormat.java
mmdeploy/PointF.java
mmdeploy/Rect.java
mmdeploy/Classifier.java
mmdeploy/Detector.java
mmdeploy/Segmentor.java
mmdeploy/TextDetector.java
mmdeploy/TextRecognizer.java
mmdeploy/Restorer.java
mmdeploy/PoseDetector.java
OUTPUT_NAME mmdeploy
OUTPUT_DIR ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})

@@ -0,0 +1,48 @@
# Build Java API
## From Source
### Requirements
- OpenJDK >= 10
**Step 1.** Download OpenJDK, using OpenJDK 18 as an example:
```bash
wget https://download.java.net/java/GA/jdk18/43f95e8614114aeaa8e8a5fcf20a682d/36/GPL/openjdk-18_linux-x64_bin.tar.gz
tar xvf openjdk-18_linux-x64_bin.tar.gz
```
**Step 2.** Set the environment variables:
```bash
export JAVA_HOME=${PWD}/jdk-18
export PATH=${JAVA_HOME}/bin:${PATH}
```
**Step 3.** Switch the default Java version:
```bash
sudo update-alternatives --config java
sudo update-alternatives --config javac
```
Select the version you intend to use.
### Installation
To use the Java API, you need to build both the Java classes and the C++ SDK.
**Step 1.** Build the Java `.class` files.
```bash
cd csrc/mmdeploy/apis/java
javac mmdeploy/*.java
cd ../../../..
```
**Step 2.** Build the MMDeploy SDK.
Please follow this [tutorial](../../../../docs/en/01-how-to-build/linux-x86_64.md) ([Chinese version](../../../../docs/zh_cn/01-how-to-build/linux-x86_64.md)) to build the SDK, and remember to set the `MMDEPLOY_BUILD_SDK_JAVA_API` option to `ON`.

@@ -0,0 +1,54 @@
package mmdeploy;
public class Classifier {
static {
System.loadLibrary("mmdeploy_java");
}
private final long handle;
public static class Result {
public int label_id;
public float score;
public Result(int label_id, float score) {
this.label_id = label_id;
this.score = score;
}
}
public Classifier(String modelPath, String deviceName, int deviceId) {
handle = create(modelPath, deviceName, deviceId);
}
public Result[][] apply(Mat[] images) {
int[] counts = new int[images.length];
Result[] results = apply(handle, images, counts);
Result[][] rets = new Result[images.length][];
int offset = 0;
for (int i = 0; i < images.length; ++i) {
Result[] row = new Result[counts[i]];
if (counts[i] >= 0) {
System.arraycopy(results, offset, row, 0, counts[i]);
}
offset += counts[i];
rets[i] = row;
}
return rets;
}
public Result[] apply(Mat image) {
int[] counts = new int[1];
Mat[] images = new Mat[]{image};
return apply(handle, images, counts);
}
public void release() {
destroy(handle);
}
private native long create(String modelPath, String deviceName, int deviceId);
private native void destroy(long handle);
private native Result[] apply(long handle, Mat[] images, int[] count);
}
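
The batch `apply(Mat[] images)` above receives a flat `Result[]` plus a per-image `counts` array from the native layer, then slices it into one row per image. A dependency-free sketch of that unflattening step (an `int` payload stands in for the JNI-backed `Result`, so it runs without `mmdeploy_java`):

```java
import java.util.Arrays;

public class UnflattenDemo {
    // Mirrors the loop in Classifier.apply(Mat[]): split a flat result
    // array into per-image rows using the counts filled in by the native call.
    public static int[][] unflatten(int[] flat, int[] counts) {
        int[][] rets = new int[counts.length][];
        int offset = 0;
        for (int i = 0; i < counts.length; ++i) {
            int[] row = new int[counts[i]];
            System.arraycopy(flat, offset, row, 0, counts[i]);
            offset += counts[i];
            rets[i] = row;
        }
        return rets;
    }

    public static void main(String[] args) {
        // 3 images produced 2, 0 and 1 results respectively.
        int[] flat = {10, 11, 20};
        int[] counts = {2, 0, 1};
        System.out.println(Arrays.deepToString(unflatten(flat, counts)));
        // prints [[10, 11], [], [20]]
    }
}
```

`Detector` and `TextDetector` below use the identical loop, so the same slicing logic applies to them.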

@@ -0,0 +1,13 @@
package mmdeploy;
public enum DataType {
FLOAT(0),
HALF(1),
INT8(2),
INT32(3);
final int value;
DataType(int value) {
this.value = value;
}
}

@@ -0,0 +1,58 @@
package mmdeploy;
public class Detector {
static {
System.loadLibrary("mmdeploy_java");
}
private final long handle;
public static class Result {
public int label_id;
public float score;
public Rect bbox;
public InstanceMask mask;
public Result(int label_id, float score, Rect bbox, InstanceMask mask) {
this.label_id = label_id;
this.score = score;
this.bbox = bbox;
this.mask = mask;
}
}
public Detector(String modelPath, String deviceName, int deviceId) {
handle = create(modelPath, deviceName, deviceId);
}
public Result[][] apply(Mat[] images) {
int[] counts = new int[images.length];
Result[] results = apply(handle, images, counts);
Result[][] rets = new Result[images.length][];
int offset = 0;
for (int i = 0; i < images.length; ++i) {
Result[] row = new Result[counts[i]];
if (counts[i] >= 0) {
System.arraycopy(results, offset, row, 0, counts[i]);
}
offset += counts[i];
rets[i] = row;
}
return rets;
}
public Result[] apply(Mat image) {
int[] counts = new int[1];
Mat[] images = new Mat[]{image};
return apply(handle, images, counts);
}
public void release() {
destroy(handle);
}
private native long create(String modelPath, String deviceName, int deviceId);
private native void destroy(long handle);
private native Result[] apply(long handle, Mat[] images, int[] count);
}

@@ -0,0 +1,12 @@
package mmdeploy;
public class InstanceMask {
public int[] shape;
public char[] data;
public InstanceMask(int height, int width, char[] data) {
shape = new int[]{height, width};
this.data = data;
}
}

@@ -0,0 +1,17 @@
package mmdeploy;
public class Mat {
public int[] shape;
public int format;
public int type;
public byte[] data;
public Mat(int height, int width, int channel,
PixelFormat format, DataType type, byte[] data) {
shape = new int[]{height, width, channel};
this.format = format.value;
this.type = type.value;
this.data = data;
}
}
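
`Mat` is a plain data holder: the JNI layer consumes `shape` as `{height, width, channel}` together with the raw `data` bytes. A small sketch of building one for a 2x2 BGR image, using a simplified stand-in with the same fields so it runs without the `mmdeploy` package (format `0` = `BGR` and type `2` = `INT8`, per the `PixelFormat`/`DataType` enums in this diff; row-major HWC byte order is an assumption consistent with the `{h, w, c}` shape):

```java
public class MatDemo {
    // Simplified stand-in mirroring mmdeploy.Mat's fields and constructor.
    static class MatSketch {
        int[] shape;
        int format;
        int type;
        byte[] data;

        MatSketch(int height, int width, int channel, int format, int type, byte[] data) {
            this.shape = new int[]{height, width, channel};
            this.format = format;
            this.type = type;
            this.data = data;
        }
    }

    public static MatSketch bgrMat(int height, int width, byte[] pixels) {
        // Pixels are assumed to be in row-major HWC order.
        return new MatSketch(height, width, 3, /*BGR*/ 0, /*INT8*/ 2, pixels);
    }

    public static void main(String[] args) {
        MatSketch mat = bgrMat(2, 2, new byte[2 * 2 * 3]);
        System.out.println(java.util.Arrays.toString(mat.shape));  // prints [2, 2, 3]
    }
}
```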

@@ -0,0 +1,15 @@
package mmdeploy;
public enum PixelFormat {
BGR(0),
RGB(1),
GRAYSCALE(2),
NV12(3),
NV21(4),
BGRA(5);
final int value;
PixelFormat(int value) {
this.value = value;
}
}

@@ -0,0 +1,12 @@
package mmdeploy;
public class PointF {
public float x;
public float y;
public PointF(float x, float y) {
this.x = x;
this.y = y;
}
}

@@ -0,0 +1,50 @@
package mmdeploy;
public class PoseDetector {
static {
System.loadLibrary("mmdeploy_java");
}
private final long handle;
public static class Result {
public PointF[] point;
public float[] score;
public Result(PointF[] point, float[] score) {
this.point = point;
this.score = score;
}
}
public PoseDetector(String modelPath, String deviceName, int deviceId) {
handle = create(modelPath, deviceName, deviceId);
}
public Result[][] apply(Mat[] images) {
Result[] results = apply(handle, images);
Result[][] rets = new Result[images.length][];
int offset = 0;
for (int i = 0; i < images.length; ++i) {
Result[] row = new Result[1];
System.arraycopy(results, offset, row, 0, 1);
offset += 1;
rets[i] = row;
}
return rets;
}
public Result[] apply(Mat image) {
Mat[] images = new Mat[]{image};
return apply(handle, images);
}
public void release() {
destroy(handle);
}
private native long create(String modelPath, String deviceName, int deviceId);
private native void destroy(long handle);
private native Result[] apply(long handle, Mat[] images);
}

@@ -0,0 +1,16 @@
package mmdeploy;
public class Rect {
public float left;
public float top;
public float right;
public float bottom;
public Rect(float left, float top, float right, float bottom) {
this.left = left;
this.top = top;
this.right = right;
this.bottom = bottom;
}
}

@@ -0,0 +1,48 @@
package mmdeploy;
public class Restorer {
static {
System.loadLibrary("mmdeploy_java");
}
private final long handle;
public static class Result {
public Mat res;
public Result(Mat res) {
this.res = res;
}
}
public Restorer(String modelPath, String deviceName, int deviceId) {
handle = create(modelPath, deviceName, deviceId);
}
public Result[][] apply(Mat[] images) {
Result[] results = apply(handle, images);
Result[][] rets = new Result[images.length][];
int offset = 0;
for (int i = 0; i < images.length; ++i) {
Result[] row = new Result[1];
System.arraycopy(results, offset, row, 0, 1);
offset += 1;
rets[i] = row;
}
return rets;
}
public Result[] apply(Mat image) {
Mat[] images = new Mat[]{image};
return apply(handle, images);
}
public void release() {
destroy(handle);
}
private native long create(String modelPath, String deviceName, int deviceId);
private native void destroy(long handle);
private native Result[] apply(long handle, Mat[] images);
}

@@ -0,0 +1,54 @@
package mmdeploy;
public class Segmentor {
static {
System.loadLibrary("mmdeploy_java");
}
private final long handle;
public static class Result {
public int height;
public int width;
public int classes;
public int[] mask;
public Result(int height, int width, int classes, int[] mask) {
this.height = height;
this.width = width;
this.classes = classes;
this.mask = mask;
}
}
public Segmentor(String modelPath, String deviceName, int deviceId) {
handle = create(modelPath, deviceName, deviceId);
}
public Result[][] apply(Mat[] images) {
Result[] results = apply(handle, images);
Result[][] rets = new Result[images.length][];
int offset = 0;
for (int i = 0; i < images.length; ++i) {
Result[] row = new Result[1];
System.arraycopy(results, offset, row, 0, 1);
offset += 1;
rets[i] = row;
}
return rets;
}
public Result[] apply(Mat image) {
Mat[] images = new Mat[]{image};
return apply(handle, images);
}
public void release() {
destroy(handle);
}
private native long create(String modelPath, String deviceName, int deviceId);
private native void destroy(long handle);
private native Result[] apply(long handle, Mat[] images);
}
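
`Segmentor.Result` returns the label map as a flat `int[] mask`; reading the fields above, a `height * width` array of per-pixel class ids is the natural interpretation (an assumption, not confirmed by this diff). A runnable sketch of post-processing such a mask by counting pixels per class:

```java
public class MaskDemo {
    // Count how many pixels each class id occupies in a flat label map,
    // assuming mask.length == height * width and ids in [0, classes).
    public static int[] classHistogram(int[] mask, int classes) {
        int[] hist = new int[classes];
        for (int id : mask) {
            hist[id]++;
        }
        return hist;
    }

    public static void main(String[] args) {
        int[] mask = {0, 0, 1, 2, 1, 0};  // hypothetical 2x3 label map
        System.out.println(java.util.Arrays.toString(classHistogram(mask, 3)));
        // prints [3, 2, 1]
    }
}
```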

@@ -0,0 +1,54 @@
package mmdeploy;
public class TextDetector {
static {
System.loadLibrary("mmdeploy_java");
}
private final long handle;
public static class Result {
public PointF[] bbox;
public float score;
public Result(PointF[] bbox, float score) {
this.bbox = bbox;
this.score = score;
}
}
public TextDetector(String modelPath, String deviceName, int deviceId) {
handle = create(modelPath, deviceName, deviceId);
}
public Result[][] apply(Mat[] images) {
int[] counts = new int[images.length];
Result[] results = apply(handle, images, counts);
Result[][] rets = new Result[images.length][];
int offset = 0;
for (int i = 0; i < images.length; ++i) {
Result[] row = new Result[counts[i]];
if (counts[i] >= 0) {
System.arraycopy(results, offset, row, 0, counts[i]);
}
offset += counts[i];
rets[i] = row;
}
return rets;
}
public Result[] apply(Mat image) {
int[] counts = new int[1];
Mat[] images = new Mat[]{image};
return apply(handle, images, counts);
}
public void release() {
destroy(handle);
}
private native long create(String modelPath, String deviceName, int deviceId);
private native void destroy(long handle);
private native Result[] apply(long handle, Mat[] images, int[] count);
}

@@ -0,0 +1,57 @@
package mmdeploy;
public class TextRecognizer {
static {
System.loadLibrary("mmdeploy_java");
}
private final long handle;
public static class Result {
public byte[] text;
public float[] score;
public Result(byte[] text, float[] score) {
this.text = text;
this.score = score;
}
}
public TextRecognizer(String modelPath, String deviceName, int deviceId) {
handle = create(modelPath, deviceName, deviceId);
}
public Result[][] apply(Mat[] images) {
Result[] results = apply(handle, images);
Result[][] rets = new Result[images.length][];
int offset = 0;
for (int i = 0; i < images.length; ++i) {
Result[] row = new Result[1];
System.arraycopy(results, offset, row, 0, 1);
offset += 1;
rets[i] = row;
}
return rets;
}
public Result[] apply(Mat image) {
Mat[] images = new Mat[]{image};
return apply(handle, images);
}
public Result[] applyBbox(Mat image, TextDetector.Result[] bbox, int[] bbox_count) {
Mat[] images = new Mat[]{image};
return applyBbox(handle, images, bbox, bbox_count);
}
public void release() {
destroy(handle);
}
private native long create(String modelPath, String deviceName, int deviceId);
private native void destroy(long handle);
private native Result[] apply(long handle, Mat[] images);
private native Result[] applyBbox(long handle, Mat[] images, TextDetector.Result[] bbox, int[] bbox_count);
}
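
`applyBbox` couples the recognizer to `TextDetector`: all boxes go in flattened, with `bbox_count[i]` saying how many belong to image `i` (the same contract as the C++ `TextRecognizer::Apply` earlier in this diff). A dependency-free sketch of building those two arrays from per-image detections, with `String` stand-ins for `TextDetector.Result`:

```java
public class OcrPipelineSketch {
    // Flatten per-image detection lists into one array, recording the
    // per-image counts in bboxCountOut (must have perImageBoxes.length slots).
    public static String[] flatten(String[][] perImageBoxes, int[] bboxCountOut) {
        int total = 0;
        for (int i = 0; i < perImageBoxes.length; ++i) {
            bboxCountOut[i] = perImageBoxes[i].length;
            total += perImageBoxes[i].length;
        }
        String[] flat = new String[total];
        int offset = 0;
        for (String[] boxes : perImageBoxes) {
            System.arraycopy(boxes, 0, flat, offset, boxes.length);
            offset += boxes.length;
        }
        return flat;
    }

    public static void main(String[] args) {
        String[][] boxes = {{"b0", "b1"}, {"b2"}};  // 2 boxes on image 0, 1 on image 1
        int[] bboxCount = new int[boxes.length];
        String[] flat = flatten(boxes, bboxCount);
        System.out.println(java.util.Arrays.toString(flat));       // prints [b0, b1, b2]
        System.out.println(java.util.Arrays.toString(bboxCount));  // prints [2, 1]
    }
}
```

Note that `Result.text` arrives as raw bytes; decoding it with `new String(text, StandardCharsets.UTF_8)` is a reasonable assumption for display, not something this diff specifies.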

@@ -0,0 +1,28 @@
# Copyright (c) OpenMMLab. All rights reserved.
project(mmdeploy_java)
if (NOT ANDROID)
find_package(JNI REQUIRED)
else ()
set(JNI_LIBRARIES)
endif()
mmdeploy_add_library(${PROJECT_NAME} SHARED EXCLUDE
mmdeploy_Classifier.cpp
mmdeploy_Detector.cpp
mmdeploy_Segmentor.cpp
mmdeploy_Restorer.cpp
mmdeploy_PoseDetector.cpp
mmdeploy_TextDetector.cpp
mmdeploy_TextRecognizer.cpp)
target_include_directories(${PROJECT_NAME} PRIVATE
${JNI_INCLUDE_DIRS})
mmdeploy_load_static(${PROJECT_NAME} MMDeployStaticModules)
mmdeploy_load_dynamic(${PROJECT_NAME} MMDeployDynamicModules)
target_link_libraries(${PROJECT_NAME} PRIVATE
${JNI_LIBRARIES} MMDeployLibs)
install(TARGETS ${PROJECT_NAME}
DESTINATION lib)

Some files were not shown because too many files have changed in this diff.