# Build for Linux-x86_64

- [Build for Linux-x86_64](#build-for-linux-x86_64)
  - [Install Toolchains](#install-toolchains)
  - [Install Dependencies](#install-dependencies)
    - [Install Dependencies for Model Converter](#install-dependencies-for-model-converter)
    - [Install Dependencies for SDK](#install-dependencies-for-sdk)
    - [Install Inference Engines for MMDeploy](#install-inference-engines-for-mmdeploy)
  - [Build MMDeploy](#build-mmdeploy)
    - [Build Options Spec](#build-options-spec)
    - [Build Model Converter](#build-model-converter)
      - [Build Custom Ops](#build-custom-ops)
      - [Install Model Converter](#install-model-converter)
    - [Build SDK and Demo](#build-sdk-and-demo)

______________________________________________________________________

## Install Toolchains

- cmake

  **Make sure cmake version >= 3.14.0**. The below script shows how to install cmake 3.20.0. You can find more versions [here](https://cmake.org/install).

  ```bash
  wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0-linux-x86_64.tar.gz
  tar -xzvf cmake-3.20.0-linux-x86_64.tar.gz
  sudo ln -sf $(pwd)/cmake-3.20.0-linux-x86_64/bin/* /usr/bin/
  ```

- GCC 7+

  MMDeploy requires compilers that support C++17.

  ```bash
  # Add repository if ubuntu < 18.04
  sudo add-apt-repository ppa:ubuntu-toolchain-r/test
  sudo apt-get update
  sudo apt-get install gcc-7
  sudo apt-get install g++-7
  ```

## Install Dependencies

### Install Dependencies for Model Converter
| NAME              | INSTALLATION                                                                                                                            |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| conda             | Please install conda according to the official guide. Create a conda virtual environment and activate it.                                |
| PyTorch (>=1.8.0) | Install PyTorch>=1.8.0 by following the official instructions. Make sure the CUDA version required by PyTorch matches that of your host. |
| mmcv-full         | Install mmcv-full as follows. Refer to the guide for details.                                                                             |
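The installation commands that originally accompanied this table were not preserved here. The following is a minimal sketch of a typical setup, assuming a CUDA 11.1 host, PyTorch 1.8.0, and mmcv-full 1.4.0; all version numbers and the python version are assumptions to be adapted to your environment.

```bash
# Create and activate a dedicated conda environment (python version is an assumption)
conda create -n mmdeploy python=3.7 -y
conda activate mmdeploy

# Install PyTorch with a cudatoolkit that matches the host CUDA driver (versions are assumptions)
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c conda-forge

# Install mmcv-full built against the same torch/CUDA combination
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8/index.html
```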
### Install Dependencies for SDK

| NAME           | INSTALLATION                                                                                                                                        |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| OpenCV (>=3.0) | On Ubuntu >=18.04, install the prebuilt packages via apt. On Ubuntu 16.04, OpenCV has to be built from the source code. Please refer to the guide.     |
| pplcv          | A high-performance image processing library of openPPL. It is optional and only needed when the cuda platform is required.                             |
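The install commands for this table were likewise lost. A minimal sketch, assuming the apt-provided OpenCV package and the public openppl-public/ppl.cv repository; the checked-out revision of ppl.cv is left to you.

```bash
# OpenCV on Ubuntu >= 18.04: the distribution package is sufficient
sudo apt-get install libopencv-dev

# pplcv: only needed when the SDK is built with the cuda device enabled
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
export PPLCV_DIR=$(pwd)
./build.sh cuda   # build the CUDA flavour of ppl.cv
```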
### Install Inference Engines for MMDeploy

| NAME        | PACKAGE               | INSTALLATION                                                                                                                                                                                                                                                                                                                                                                                                   |
| ----------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| ONNXRuntime | onnxruntime (>=1.8.1) | 1. Install the python package. 2. Download the Linux prebuilt binary package from here. Extract it and export environment variables as below.                                                                                                                                                                                                                                                                 |
| TensorRT    | TensorRT              | 1. Log in to NVIDIA and download the TensorRT tar file that matches the CPU architecture and CUDA version you are using from here. Follow the guide to install TensorRT. 2. Here is an example of installing TensorRT 8.2 GA Update 2 for Linux x86_64 and CUDA 11.x that you can refer to: first, click here to download CUDA 11.x TensorRT 8.2.3.0, then install it and the other dependencies as below.     |
|             | cuDNN                 | 1. Download cuDNN that matches the CPU architecture, CUDA version and TensorRT version you are using from the cuDNN Archive. The above TensorRT installation example requires cuDNN 8.2, so download CUDA 11.x cuDNN 8.2. 2. Extract the compressed file and set the environment variables.                                                                                                                    |
| PPL.NN      | ppl.nn                | 1. Please follow the guide to build ppl.nn and install pyppl. 2. Export pplnn's root path to an environment variable.                                                                                                                                                                                                                                                                                         |
| OpenVINO    | openvino              | 1. Install the OpenVINO package. 2. Optional: if you want to use OpenVINO in the MMDeploy SDK, please install and configure it by following the guide.                                                                                                                                                                                                                                                        |
| ncnn        | ncnn                  | 1. Download and build ncnn according to its wiki. Make sure to enable `-DNCNN_PYTHON=ON` in your build command. 2. Export ncnn's root path to an environment variable. 3. Install pyncnn.                                                                                                                                                                                                                     |
| TorchScript | libtorch              | 1. Download libtorch from here. Please note that only the Pre-cxx11 ABI and version 1.8.1+ on the Linux platform are supported for now. Previous versions of libtorch can be found in the issue comment. 2. Take Libtorch 1.8.1+cu111 as an example; you can install it as below.                                                                                                                              |
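The per-engine install commands originally embedded in this table did not survive. As one illustration, here is a minimal sketch for ONNXRuntime and TensorRT/cuDNN, where the archive file names and version numbers are assumptions derived from the versions mentioned above.

```bash
# ONNXRuntime: install the python package and the prebuilt C/C++ runtime
pip install onnxruntime==1.8.1
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
export ONNXRUNTIME_DIR=$(pwd)/onnxruntime-linux-x64-1.8.1
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

# TensorRT: extract the tar file downloaded from NVIDIA (file name is an assumption)
tar -xzvf TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz
export TENSORRT_DIR=$(pwd)/TensorRT-8.2.3.0
export LD_LIBRARY_PATH=$TENSORRT_DIR/lib:$LD_LIBRARY_PATH
pip install $TENSORRT_DIR/python/tensorrt-8.2.3.0-cp37-none-linux_x86_64.whl  # pick the wheel matching your python

# cuDNN: extract the archive downloaded from the cuDNN Archive (file name is an assumption)
tar -xzvf cudnn-11.3-linux-x64-v8.2.1.32.tgz
export CUDNN_DIR=$(pwd)/cuda
export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH
```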
If you want to make the above environment variables permanent, you could add them to `~/.bashrc`. Take ONNXRuntime as an example:
```bash
echo '# set env for onnxruntime' >> ~/.bashrc
echo "export ONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH" >> ~/.bashrc
source ~/.bashrc
```
## Build MMDeploy
```bash
cd /the/root/path/of/MMDeploy
export MMDEPLOY_DIR=$(pwd)
```
### Build Options Spec
| NAME                          | VALUE                                                      | DEFAULT | REMARK                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| ----------------------------- | ---------------------------------------------------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| MMDEPLOY_BUILD_SDK            | {ON, OFF}                                                  | OFF     | Switch to build the MMDeploy SDK                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| MMDEPLOY_BUILD_SDK_PYTHON_API | {ON, OFF}                                                  | OFF     | Switch to build the MMDeploy SDK python package                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
| MMDEPLOY_BUILD_SDK_JAVA_API   | {ON, OFF}                                                  | OFF     | Switch to build the MMDeploy SDK Java API                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| MMDEPLOY_BUILD_TEST           | {ON, OFF}                                                  | OFF     | Switch to build the MMDeploy SDK unittest cases                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
| MMDEPLOY_TARGET_DEVICES       | {"cpu", "cuda"}                                            | cpu     | Enable target devices. You can enable more than one by passing a semicolon-separated list of device names to MMDEPLOY_TARGET_DEVICES, e.g. `-DMMDEPLOY_TARGET_DEVICES="cpu;cuda"`                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| MMDEPLOY_TARGET_BACKENDS      | {"trt", "ort", "pplnn", "ncnn", "openvino", "torchscript"} | N/A     | Enable inference engines. By default, no target inference engine is set, since it highly depends on the use case. When more than one engine is specified, pass a semicolon-separated list of inference backend names, e.g. `-DMMDEPLOY_TARGET_BACKENDS="trt;ort"`. After specifying the inference engines, their package paths have to be passed to cmake as follows: 1. trt: TensorRT. TENSORRT_DIR and CUDNN_DIR are needed. 2. ort: ONNXRuntime. ONNXRUNTIME_DIR is needed. 3. pplnn: PPL.NN. pplnn_DIR is needed. 4. ncnn: ncnn. ncnn_DIR is needed. 5. openvino: OpenVINO. InferenceEngine_DIR is needed. 6. torchscript: TorchScript. Torch_DIR is needed. Currently, the Model Converter supports torchscript, but the SDK doesn't. |
| MMDEPLOY_CODEBASES            | {"mmcls", "mmdet", "mmseg", "mmedit", "mmocr", "all"}      | all     | Enable codebases' postprocessing modules. You can provide a semicolon-separated list of codebase names to enable them, e.g. `-DMMDEPLOY_CODEBASES="mmcls;mmdet"`, or pass `all` to enable them all, i.e. `-DMMDEPLOY_CODEBASES=all`                                                                                                                                                                                                                                                                                                                                                                                                                      |
| MMDEPLOY_SHARED_LIBS          | {ON, OFF}                                                  | ON      | Switch to build the MMDeploy SDK as shared libraries or static libraries                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
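To show how these options fit together, here is a hedged configuration sketch that enables the SDK for both cpu and cuda devices with the TensorRT and ONNXRuntime backends. The build directory layout and `make` invocation are assumptions; the environment variables are the ones exported in the earlier dependency steps.

```bash
cd ${MMDEPLOY_DIR}
mkdir -p build && cd build
cmake .. \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DMMDEPLOY_TARGET_DEVICES="cpu;cuda" \
    -DMMDEPLOY_TARGET_BACKENDS="trt;ort" \
    -DTENSORRT_DIR=${TENSORRT_DIR} \
    -DCUDNN_DIR=${CUDNN_DIR} \
    -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
    -DMMDEPLOY_CODEBASES=all
make -j$(nproc)
```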