mmdeploy/docs/en/05-supported-backends/ncnn.md


# ncnn Support
MMDeploy supports ncnn version 1.0.20220216.
## Installation
### Install ncnn
- Download the Vulkan SDK, which is required to build ncnn with Vulkan (GPU) support.
```bash
wget https://sdk.lunarg.com/sdk/download/1.2.176.1/linux/vulkansdk-linux-x86_64-1.2.176.1.tar.gz?Human=true -O vulkansdk-linux-x86_64-1.2.176.1.tar.gz
tar -xf vulkansdk-linux-x86_64-1.2.176.1.tar.gz
export VULKAN_SDK=$(pwd)/1.2.176.1/x86_64
export LD_LIBRARY_PATH=$VULKAN_SDK/lib:$LD_LIBRARY_PATH
```
- Check your gcc version. ncnn requires `gcc >= 6`.
- Install Protocol Buffers through:
```bash
apt-get install libprotobuf-dev protobuf-compiler
```
- Prepare ncnn Framework
- Download ncnn source code
```bash
git clone -b 20220216 git@github.com:Tencent/ncnn.git
```
- <font color=red>Make install</font> ncnn library
```bash
cd ncnn
export NCNN_DIR=$(pwd)
git submodule update --init
mkdir -p build && cd build
cmake -DNCNN_VULKAN=ON -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_PYTHON=ON -DNCNN_BUILD_TOOLS=ON -DNCNN_BUILD_BENCHMARK=ON -DNCNN_BUILD_TESTS=ON ..
make install
```
- Install pyncnn module
```bash
cd ${NCNN_DIR} # To ncnn root directory
cd python
pip install -e .
```
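A quick way to verify the installation is a short probe script (a sketch; it only checks that the `ncnn` module built above is importable):

```python
import importlib.util

# Probe for the pyncnn module installed from the ncnn python directory.
spec = importlib.util.find_spec("ncnn")
if spec is None:
    print("pyncnn not found: re-run `pip install -e .` in the ncnn python directory")
else:
    import ncnn
    net = ncnn.Net()  # empty network; load .param/.bin files before inference
    print("pyncnn is available")
```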
### Build custom ops
MMDeploy provides custom ncnn ops to support OpenMMLab models. They can be built as follows:
```bash
cd ${MMDEPLOY_DIR}
mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn ..
make -j$(nproc)
```
If ncnn is not installed in the default path, add the `-Dncnn_DIR` flag to the cmake command:
```bash
cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=${NCNN_DIR}/build/install/lib/cmake/ncnn ..
make -j$(nproc)
```
## Convert model
- This follows the tutorial on [How to convert model](../02-how-to-run/convert_model.md).
- The conversion produces two files: a `.param` file holding the model structure and a `.bin` file holding the weights.
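The `.param` file is plain text, so its structure can be inspected directly. As a rough sketch (the sample below is a hand-written toy network, not the output of an actual conversion), the layout is a magic number, a layer/blob count line, then one line per layer:

```python
# Toy parser for the ncnn .param text format: magic number 7767517,
# then "layer_count blob_count", then one line per layer of the form
# "<type> <name> <input_count> <output_count> <input blobs> <output blobs> <params>".
SAMPLE_PARAM = """\
7767517
2 2
Input            data  0 1 data
Softmax          prob  1 1 data prob 0=0
"""

def parse_param(text):
    lines = [line.split() for line in text.strip().splitlines()]
    assert lines[0] == ["7767517"], "not an ncnn param file"
    layer_count, blob_count = map(int, lines[1])
    layers = []
    for fields in lines[2:]:
        layer_type, name = fields[0], fields[1]
        n_in, n_out = int(fields[2]), int(fields[3])
        layers.append({"type": layer_type, "name": name,
                       "inputs": fields[4:4 + n_in],
                       "outputs": fields[4 + n_in:4 + n_in + n_out]})
    assert len(layers) == layer_count
    return layers

layers = parse_param(SAMPLE_PARAM)
print([l["type"] for l in layers])  # ['Input', 'Softmax']
```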
## Reminder
- In ncnn version >= 1.0.20220216, an `ncnn.Mat` can have at most 4 dimensions.
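Given that limit, a small guard (illustrative only, using numpy) can squeeze size-1 axes before handing data to ncnn:

```python
import numpy as np

def to_ncnn_compatible(arr: np.ndarray) -> np.ndarray:
    # ncnn.Mat holds at most 4 dimensions, so drop size-1 axes first.
    if arr.ndim > 4:
        arr = arr.squeeze()
    if arr.ndim > 4:
        raise ValueError(f"{arr.ndim}-D tensor cannot be represented as ncnn.Mat")
    return arr

x = to_ncnn_compatible(np.zeros((1, 1, 3, 224, 224)))
print(x.shape)  # (3, 224, 224)
```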
## FAQs
1. When running ncnn models for inference with custom ops, it fails and shows the error message like:
```bash
TypeError: register_mm_custom_layers(): incompatible function arguments. The following argument types are supported:
    1. (arg0: ncnn.Net) -> int
Invoked with: <ncnn.ncnn.Net object at 0x7f7fc4038bb0>
```
This happens when the pyncnn bindings are not linked against the ncnn C++ library built above. Build pyncnn from the ncnn source tree as described in the installation steps, rather than installing the prebuilt wheel via `pip install ncnn`.
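To check which pyncnn build the interpreter is actually picking up (a hedged diagnostic sketch, not an mmdeploy command):

```python
import importlib.util

spec = importlib.util.find_spec("ncnn")
if spec is None:
    print("no pyncnn on this interpreter")
else:
    # An editable install from the ncnn source tree resolves inside that tree;
    # a `pip install ncnn` wheel resolves into site-packages instead.
    print("ncnn imported from:", spec.origin)
```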