
# ONNX Runtime Support

## Introduction of ONNX Runtime

**ONNX Runtime** is a cross-platform inferencing and training accelerator compatible with many popular ML/DNN frameworks. Check its [GitHub repository](https://github.com/microsoft/onnxruntime) for more information.

## Installation

Please note that only the CPU version of `onnxruntime>=1.8.1` on the Linux platform is supported for now.

- Install the ONNX Runtime Python package

  ```bash
  pip install onnxruntime==1.8.1
  ```
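To quickly verify the installation, a minimal Python check like the following can be used; it only assumes the wheel installed above.

```python
# Minimal sanity check for the ONNX Runtime Python package installed above.
import onnxruntime as ort

print(ort.__version__)                 # expected: 1.8.1
print(ort.get_available_providers())   # e.g. ['CPUExecutionProvider']
```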

## Build custom ops

### Prerequisite

- Download `onnxruntime-linux` from ONNX Runtime releases, extract it, expose `ONNXRUNTIME_DIR` and finally add the lib path to `LD_LIBRARY_PATH` as below:

  ```bash
  wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz

  tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
  cd onnxruntime-linux-x64-1.8.1
  export ONNXRUNTIME_DIR=$(pwd)
  export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
  ```

Note:

- If you want to save the onnxruntime environment variables to bashrc, you could run

  ```bash
  echo '# set env for onnxruntime' >> ~/.bashrc
  echo "export ONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}" >> ~/.bashrc
  echo 'export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
  source ~/.bashrc
  ```

### Build on Linux

```bash
cd ${MMDEPLOY_DIR} # To MMDeploy root directory
mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)
```
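Once the build finishes, the resulting custom op library can be loaded into an ONNX Runtime session from Python. The sketch below is only illustrative: the library path `build/lib/libmmdeploy_onnxruntime_ops.so` and the model file name `end2end.onnx` are assumptions, so adjust them to your actual build output and exported model.

```python
# A minimal sketch, assuming the custom op library was built to
# ${MMDEPLOY_DIR}/build/lib/libmmdeploy_onnxruntime_ops.so (adjust the path to
# your build output) and that end2end.onnx is a model exported by MMDeploy.
import onnxruntime as ort

session_options = ort.SessionOptions()
# Register the custom op library so ONNX Runtime can resolve the MMDeploy ops.
session_options.register_custom_ops_library("build/lib/libmmdeploy_onnxruntime_ops.so")

session = ort.InferenceSession(
    "end2end.onnx",
    sess_options=session_options,
    providers=["CPUExecutionProvider"],
)
print([inp.name for inp in session.get_inputs()])
```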

## How to convert a model

## List of supported custom ops

| Operator                  | CPU | GPU | MMDeploy Releases |
| :------------------------ | :-: | :-: | :---------------- |
| grid_sampler              |  Y  |  N  | master            |
| MMCVModulatedDeformConv2d |  Y  |  N  | master            |
| NMSRotated                |  Y  |  N  | master            |
| RoIAlignRotated           |  Y  |  N  | master            |
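To check whether an exported model actually relies on these custom ops (and therefore needs the custom op library at inference time), you can inspect the node domains with the `onnx` package. The model file name below is only an assumption.

```python
# A small sketch: list ops that live outside the standard ONNX domain.
# "end2end.onnx" is a placeholder for a model exported by MMDeploy.
import onnx

model = onnx.load("end2end.onnx")
for node in model.graph.node:
    if node.domain not in ("", "ai.onnx"):  # standard ONNX ops use these domains
        print(node.domain, node.op_type)    # custom ops show up here
```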

## How to add a new custom op

### Reminder

- The custom operator is not included in the list of operators supported by ONNX Runtime.
- The custom operator should be exportable to ONNX.

### Main procedures

Take the custom operator `roi_align` for example.

1. Create a `roi_align` directory in the ONNX Runtime backend op directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/`
2. Add the header and source files into the `roi_align` directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/roi_align/`
3. Add a unit test into `tests/test_ops/test_ops.py`. Check the existing tests there for examples; a hedged sketch of such a test follows this list.
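For orientation, such a unit test usually registers the custom op library, runs the exported ONNX model and compares the output against a reference. The sketch below is not the actual MMDeploy test harness; the model file, input names, reference file and tolerances are hypothetical, so refer to `tests/test_ops/test_ops.py` for the real setup.

```python
# A hedged sketch of a custom-op unit test; file names, input names and
# tolerances are hypothetical -- see tests/test_ops/test_ops.py for the
# real MMDeploy test utilities.
import numpy as np
import onnxruntime as ort


def test_roi_align_custom_op():
    session_options = ort.SessionOptions()
    # Load the custom op library built in the previous section (assumed path).
    session_options.register_custom_ops_library("build/lib/libmmdeploy_onnxruntime_ops.so")
    session = ort.InferenceSession(
        "roi_align.onnx",
        sess_options=session_options,
        providers=["CPUExecutionProvider"],
    )

    feats = np.random.rand(1, 4, 16, 16).astype(np.float32)
    rois = np.array([[0.0, 0.0, 0.0, 8.0, 8.0]], dtype=np.float32)  # (batch_idx, x1, y1, x2, y2)
    (output,) = session.run(None, {"input": feats, "rois": rois})

    expected = np.load("roi_align_expected.npy")  # reference computed beforehand
    np.testing.assert_allclose(output, expected, rtol=1e-3, atol=1e-5)
```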

Finally, you are welcome to send us a PR adding custom operators for ONNX Runtime in MMDeploy. 🤓

## FAQs

- None

## References