update backend docs (#319)

RunningLeon 2021-12-23 13:23:39 +08:00 committed by GitHub
parent 9d4d52078b
commit f76ac64cfe
4 changed files with 18 additions and 20 deletions


@@ -5,10 +5,14 @@
#### Install ncnn
- Download the Vulkan SDK, which is needed to compile ncnn with Vulkan support.
```bash
wget https://sdk.lunarg.com/sdk/download/1.2.176.1/linux/vulkansdk-linux-x86_64-1.2.176.1.tar.gz?Human=true -O vulkansdk-linux-x86_64-1.2.176.1.tar.gz
tar -xf vulkansdk-linux-x86_64-1.2.176.1.tar.gz
export VULKAN_SDK=$(pwd)/1.2.176.1/x86_64
export LD_LIBRARY_PATH=$VULKAN_SDK/lib:$LD_LIBRARY_PATH
```
- Check your gcc version.
You should ensure your gcc satisfies `gcc >= 6`.
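For example, you can check the installed version from the shell:
```bash
# print the installed gcc version; it should report 6.x or newer
gcc --version
```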
@@ -27,17 +31,18 @@ You should ensure your gcc satisfies `gcc >= 6`.
- <font color=red>Make install</font> ncnn library
```bash
cd ncnn
+export NCNN_DIR=$(pwd)
git submodule update --init
-mkdir build
-cd build
+mkdir -p build && cd build
cmake -DNCNN_VULKAN=ON -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_PYTHON=ON -DNCNN_BUILD_TOOLS=ON -DNCNN_BUILD_BENCHMARK=ON -DNCNN_BUILD_TESTS=ON ..
make install
```
- Install pyncnn module
```bash
-cd ncnn/python
-pip install .
+cd ${NCNN_DIR} # To NCNN root directory
+cd python
+pip install -e .
```
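As a quick sanity check (a minimal sketch; it only verifies that the binding is importable from the current environment):
```bash
# should exit without errors if pyncnn was installed correctly
python -c "import ncnn"
```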
#### Build custom ops
@@ -46,8 +51,7 @@ Some custom ops are created to support models in OpenMMLab, the custom ops can b
```bash
cd ${MMDEPLOY_DIR}
-mkdir build
-cd build
+mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn ..
make -j$(nproc)
```
@@ -55,7 +59,7 @@ make -j$(nproc)
If you haven't installed ncnn in the default path, please add the `-Dncnn_DIR` flag when running cmake.
```bash
-cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR={path/of/ncnn}/build/install/lib/cmake/ncnn ..
+cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=${NCNN_DIR}/build/install/lib/cmake/ncnn ..
make -j$(nproc)
```
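If you are not sure which directory to pass to `-Dncnn_DIR`, you can search the install tree for the CMake package config (assuming the default `make install` layout used above; the file is typically named `ncnnConfig.cmake`):
```bash
# the directory containing ncnnConfig.cmake is what -Dncnn_DIR expects
find ${NCNN_DIR}/build/install -name "ncnnConfig.cmake"
```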


@@ -44,10 +44,9 @@ Note:
```bash
cd ${MMDEPLOY_DIR} # To MMDeploy root directory
-mkdir build
-cd build
+mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
-make -j10
+make -j$(nproc)
```
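Once `make` finishes, the ONNX Runtime custom-op library should be present in the build tree; a quick way to confirm (the `build/lib` location is an assumption based on the default CMake layout):
```bash
# the custom-op shared library for ONNX Runtime should be listed here
ls ${MMDEPLOY_DIR}/build/lib
```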
### How to convert a model
@@ -73,8 +72,8 @@ make -j10
Take the custom operator `roi_align` as an example.
-1. Create a `roi_align` directory in ONNX Runtime source directory `backend_ops/onnxruntime/`
-2. Add header and source file into `roi_align` directory `backend_ops/onnxruntime/roi_align/`
+1. Create a `roi_align` directory under the ONNX Runtime custom-op directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/`
+2. Add the header and source files to the `roi_align` directory `${MMDEPLOY_DIR}/csrc/backend_ops/onnxruntime/roi_align/`
3. Add a unit test to `tests/test_ops/test_ops.py`
Check [here](../../../tests/test_ops/test_ops.py) for examples; a command-line sketch of the workflow follows.
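A sketch of these three steps from the shell (the `-k roi_align` filter for pytest is an assumption; adjust it to the actual test names):
```bash
cd ${MMDEPLOY_DIR}
# 1. create a directory for the new op's sources
mkdir -p csrc/backend_ops/onnxruntime/roi_align
# 2. add the header/source files there, then rebuild the custom ops
cd build
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)
# 3. run the unit tests that cover the new op
cd ..
pytest tests/test_ops/test_ops.py -k roi_align
```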


@@ -63,8 +63,8 @@ The table below lists the models that are guaranteed to be exportable to OpenVIN
| Faster R-CNN + DCN | `configs/dcn/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py` | Y |
| VFNet | `configs/vfnet/vfnet_r50_fpn_1x_coco.py` | Y |
Notes:
- Custom operations from OpenVINO use the domain `org.openvinotoolkit`.
- For faster inference in OpenVINO, the RoiAlign operation in the Faster-RCNN, Mask-RCNN, Cascade-RCNN and Cascade-Mask-RCNN models
is replaced in the ONNX graph with the [ExperimentalDetectronROIFeatureExtractor](https://docs.openvinotoolkit.org/latest/openvino_docs_ops_detection_ExperimentalDetectronROIFeatureExtractor_6.html) operation.


@@ -25,9 +25,8 @@ Please install TensorRT 8 follow [install-guide](https://docs.nvidia.com/deeplea
Some custom ops are created to support models in OpenMMLab, and they can be built as follows:
```bash
-cd ${MMDEPLOY_DIR}
-mkdir build
-cd build
+cd ${MMDEPLOY_DIR} # To MMDeploy root directory
+mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_BACKENDS=trt ..
make -j$(nproc)
```
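Before converting models, it can also help to confirm that the TensorRT Python package is visible to the interpreter (a minimal check; it does not validate the custom plugins themselves):
```bash
# print the TensorRT Python package version; it should report 8.x
python -c "import tensorrt; print(tensorrt.__version__)"
```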
@@ -73,10 +72,6 @@ test_pipeline = [
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
-    train=dict(
-        type=dataset_type,
-        ann_file=data_root + 'train_annotations.json',
-        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'val_annotations.json',