# ONNX Runtime Support
## Introduction to ONNX Runtime
**ONNX Runtime** is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Check its [GitHub repository](https://github.com/microsoft/onnxruntime) for more information.
## Installation
*Please note that only **onnxruntime>=1.8.1** on the Linux platform is supported for now.*
### Install ONNX Runtime python package
- CPU Version
```bash
pip install onnxruntime==1.8.1 # if you want to use cpu version
```
- GPU Version
```bash
pip install onnxruntime-gpu==1.8.1 # if you want to use gpu version
```
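After installation, a quick check like the following can confirm which build is present and which execution providers are available. This is a hedged sketch: it only reports what is importable in the current environment, and the GPU build is expected (but not guaranteed here) to list `CUDAExecutionProvider`.

```python
import importlib.util


def check_ort_install():
    """Report the installed onnxruntime version and its execution providers."""
    if importlib.util.find_spec("onnxruntime") is None:
        return "onnxruntime is not installed"
    import onnxruntime as ort

    # GPU builds additionally expose CUDAExecutionProvider
    providers = ort.get_available_providers()
    return "onnxruntime %s, providers: %s" % (ort.__version__, providers)


print(check_ort_install())
```

If the GPU package is installed but `CUDAExecutionProvider` is missing from the list, the CUDA libraries are usually not visible on `LD_LIBRARY_PATH`.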
## Build custom ops
### Download ONNX Runtime Library
Download the `onnxruntime-linux-*.tgz` library from the ONNX Runtime [releases](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1) page, extract it, expose `ONNXRUNTIME_DIR`, and finally add the lib path to `LD_LIBRARY_PATH` as below:
- CPU Version
```bash
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```
- GPU Version
```bash
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-gpu-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-gpu-1.8.1.tgz
cd onnxruntime-linux-x64-gpu-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
```
### Build on Linux
- CPU Version
```bash
cd ${MMDEPLOY_DIR} # To MMDeploy root directory
mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_DEVICES='cpu' -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc) && make install
```
- GPU Version
```bash
cd ${MMDEPLOY_DIR} # To MMDeploy root directory
mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_DEVICES='cuda' -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc) && make install
```
## How to convert a model
- Follow the instructions in the tutorial [How to convert model](../02-how-to-run/convert_model.md)
## How to add a new custom op
### Reminder
- Make sure the custom operator is not included in the [supported operator list](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md) of ONNX Runtime.
- Make sure the custom operator can be exported to ONNX.
### Main procedures
Take custom operator `roi_align` for example.
1. Create a `roi_align` directory in the MMDeploy ONNX Runtime backend ops directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/`
2. Add the header and source files into the `roi_align` directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/roi_align/`
3. Add unit test into `tests/test_ops/test_ops.py`
Check [here](../../../tests/test_ops/test_ops.py) for examples.
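Such a unit test typically runs the exported model under ONNX Runtime with the custom-op library registered and compares the result against a reference output. A hedged sketch of that pattern (the file names, library path, and single-output assumption are illustrative, not the exact harness used in `test_ops.py`):

```python
def run_custom_op_check(onnx_file, ops_lib, inputs, expected, rtol=1e-3):
    """Run onnx_file with the custom-op library loaded; compare to expected."""
    import numpy as np
    import onnxruntime as ort

    so = ort.SessionOptions()
    so.register_custom_ops_library(ops_lib)
    sess = ort.InferenceSession(
        onnx_file, sess_options=so, providers=["CPUExecutionProvider"]
    )
    # `inputs` maps input names to numpy arrays; assume a single output
    (out,) = sess.run(None, inputs)
    np.testing.assert_allclose(out, expected, rtol=rtol)
```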
**Finally, you are welcome to send us a PR adding custom operators for ONNX Runtime in MMDeploy.** :nerd_face:
## References
- [How to export a PyTorch model with a custom op to ONNX and run it in ONNX Runtime](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
- [How to add a custom operator/kernel in ONNX Runtime](https://onnxruntime.ai/docs/reference/operators/add-custom-op.html)