
## Introduction

## Installation

### Dependencies

MMDeploy requires a compiler supporting C++17 (e.g. GCC 7+) and CMake 3.14+ to build. Currently, it is tested on Linux x86-64; more platforms will be added in the future. The following packages are required to build the MMDeploy SDK:

- OpenCV 3+
- spdlog 0.16+

Make sure they can be found by `find_package` in CMake. If they are not installed by your OS's package manager, you probably need to pass their locations via `CMAKE_PREFIX_PATH` or the corresponding `*_DIR` variables, as shown below.
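
For instance, custom installs of both packages can be exposed to CMake in either of the following ways; the prefixes are placeholders for your own install locations:

```bash
# Option 1: add the install prefixes to CMake's package search path
cmake .. -DCMAKE_PREFIX_PATH="/opt/opencv;/opt/spdlog"

# Option 2: point each package's *_DIR variable at its CMake config directory
cmake .. \
    -DOpenCV_DIR=/opt/opencv/lib/cmake/OpenCV \
    -Dspdlog_DIR=/opt/spdlog/lib/cmake/spdlog
```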

On Ubuntu 16.04, please use the following commands to install spdlog instead of `apt-get install libspdlog-dev`:

```bash
wget http://archive.ubuntu.com/ubuntu/pool/universe/s/spdlog/libspdlog-dev_0.16.3-1_amd64.deb
sudo dpkg -i libspdlog-dev_0.16.3-1_amd64.deb
```

### Enabling devices

By default, only the CPU device is included in the target devices. Support for other devices can be enabled by passing a semicolon-separated list of device names to the `MMDEPLOY_TARGET_DEVICES` variable, e.g. `"cpu;cuda"`. The following devices are currently supported.

| device | name | path setter           |
| ------ | ---- | --------------------- |
| Host   | cpu  | N/A                   |
| CUDA   | cuda | CUDA_TOOLKIT_ROOT_DIR |

If you have multiple CUDA versions installed on your system, pass `CUDA_TOOLKIT_ROOT_DIR` to CMake to specify which one to use, as shown below.
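
A minimal sketch of such a configuration; the toolkit path is a placeholder for whichever CUDA version you want to build against:

```bash
cmake .. \
    -DMMDEPLOY_TARGET_DEVICES="cpu;cuda" \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.1
```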

### Enabling inference engines

By default, no inference engine is enabled, since the choice is highly dependent on the use case. `MMDEPLOY_TARGET_BACKENDS` must be set to a semicolon-separated list of inference engine names, and the path to each engine's library must be provided as well. The following backends are currently supported.

| library     | name     | path setter              |
| ----------- | -------- | ------------------------ |
| PPL.nn      | pplnn    | pplnn_DIR                |
| ncnn        | ncnn     | ncnn_DIR                 |
| ONNXRuntime | ort      | ONNXRUNTIME_DIR          |
| TensorRT    | trt      | TENSORRT_DIR & CUDNN_DIR |
| OpenVINO    | openvino | InferenceEngine_DIR      |
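
For example, enabling the TensorRT backend might look like this; both paths are placeholders for your local TensorRT and cuDNN installations:

```bash
cmake .. \
    -DMMDEPLOY_TARGET_DEVICES="cpu;cuda" \
    -DMMDEPLOY_TARGET_BACKENDS=trt \
    -DTENSORRT_DIR=/path/to/TensorRT \
    -DCUDNN_DIR=/path/to/cudnn
```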

### Put it all together

The following is a recipe for building the MMDeploy SDK with the CPU device and ONNXRuntime support:

```bash
mkdir build && cd build
cmake \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_CXX_COMPILER=g++-7 \
    -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
    -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
    -DONNXRUNTIME_DIR=/path/to/onnxruntime \
    -DMMDEPLOY_TARGET_DEVICES=cpu \
    -DMMDEPLOY_TARGET_BACKENDS=ort \
    -DMMDEPLOY_CODEBASES=all \
    ..
cmake --build . -- -j$(nproc) && cmake --install .
```
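
Note that `cmake --install .` honors `CMAKE_INSTALL_PREFIX`, which defaults to a system location such as `/usr/local`. To get the `build/install` layout that the Getting Started steps below assume, you can set the prefix explicitly when configuring (a suggested setting, not part of the recipe above):

```bash
# configure with an in-tree install prefix, then build and install
cmake .. -DCMAKE_INSTALL_PREFIX=$(pwd)/install
cmake --build . -- -j$(nproc) && cmake --install .
```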

## Getting Started

After building & installing, the installation folder should have the following structure:

```
.
└── Release
    ├── example
    │   ├── CMakeLists.txt
    │   ├── image_classification.cpp
    │   └── object_detection.cpp
    ├── include
    │   ├── c
    │   │   ├── classifier.h
    │   │   ├── common.h
    │   │   ├── detector.h
    │   │   ├── restorer.h
    │   │   ├── segmentor.h
    │   │   ├── text_detector.h
    │   │   └── text_recognizer.h
    │   └── cpp
    │       ├── archive
    │       ├── core
    │       └── experimental
    └── lib
```

where `include/c` and `include/cpp` contain the C and C++ APIs respectively.

**Caution:** The C++ API is highly volatile and not recommended at the moment.

In the `example` directory, there are two examples covering image classification and object detection. The examples are tested with ONNXRuntime on CPU. More examples for more devices and backends will come once our CMake packaging code is ready.

To start with, put the corresponding ONNX models exported for ONNXRuntime into `demo/config/resnet50_ort` and `demo/config/retinanet_ort`, renaming each to `end2end.onnx` to match the configs. The models can be exported using MMDeploy or the corresponding OpenMMLab codebases; this step will be automated once the script that packages converted models into SDK models is ready.
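
As a sketch of the export step, MMDeploy's `tools/deploy.py` can produce such a model. All arguments below are placeholders, and the exact deploy config names depend on your MMDeploy version:

```bash
# Run from the MMDeploy source root. Pick an ONNXRuntime deploy config shipped
# with MMDeploy, plus the codebase's model config, checkpoint, and a test image.
python tools/deploy.py \
    ${ONNXRUNTIME_DEPLOY_CFG} \
    ${MODEL_CFG} \
    ${MODEL_CHECKPOINT} \
    ${TEST_IMAGE} \
    --work-dir demo/config/resnet50_ort
```

`tools/deploy.py` typically writes the exported model as `end2end.onnx` under the given `--work-dir`, matching the name the configs expect.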

Here is a recipe for building and running the examples:

```bash
cd build/install/example

# path to ONNX Runtime libraries
export LD_LIBRARY_PATH=/path/to/onnxruntime/lib:$LD_LIBRARY_PATH

mkdir build && cd build
cmake -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -DMMDeploy_DIR=${MMDeploy_SOURCE_ROOT_DIR}/build/install/lib/cmake/MMDeploy ..
cmake --build .

# suppress verbose logs
export SPDLOG_LEVEL=warn

# run the image classification example
./image_classification ../config/resnet50_ort /path/to/an/image

# run the object detection example
./object_detection ../config/retinanet_ort /path/to/an/image
```