- [Install Dependencies for Model Converter](#install-dependencies-for-model-converter)
- [Install Dependencies for SDK](#install-dependencies-for-sdk)
- [Install Inference Engines for MMDeploy](#install-inference-engines-for-mmdeploy)
- [Build MMDeploy](#build-mmdeploy)
- [Build Options Spec](#build-options-spec)
- [Build Model Converter](#build-model-converter)
- [Build Custom Ops](#build-custom-ops)
- [Install Model Converter](#install-model-converter)
- [Build SDK](#build-sdk)
- [Build Demo](#build-demo)
- [Note](#note)
---
Currently, MMDeploy provides only the build-from-source method for the Windows platform. Prebuilt packages will be released in the future.
## Build From Source
All the commands listed in the following chapters are verified on **Windows 10**.
### Install Toolchains
1. Download and install [Visual Studio 2019](https://visualstudio.microsoft.com)
2. Add the path of `cmake` to the environment variable `PATH`, e.g., "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin" (see the example after this list)
3. Install the CUDA Toolkit if an NVIDIA GPU is available. You can refer to the official [guide](https://developer.nvidia.com/cuda-downloads).
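For reference, here is a minimal PowerShell sketch of steps 2 and 3 above, assuming the default Visual Studio Community install location (adjust the path to your own installation):
```powershell
# Make the Visual Studio bundled cmake visible in the current session; the path is only an example
$env:PATH = "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;" + $env:PATH
# Verify the toolchain
cmake --version
nvcc --version   # only meaningful if the CUDA toolkit has been installed
```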
### Install Dependencies
#### Install Dependencies for Model Converter
<table class="docutils">
<thead>
<tr>
<th>NAME </th>
<th>INSTALLATION </th>
</tr>
</thead>
<tbody>
<tr>
<td>conda </td>
<td> Please install conda according to the official <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html">guide</a>. <br>
After installation, open <code>anaconda powershell prompt</code> from the Start Menu <b>as administrator</b>, because: <br>
1. <b>All the commands listed in the following text are verified in anaconda powershell</b><br>
2. <b>As administrator, you can install the third-party libraries to the system path so as to simplify the MMDeploy build commands</b><br>
Note: if you are familiar with how cmake works, you can also use <code>anaconda powershell prompt</code> as an ordinary user.
</td>
</tr>
<tr>
<td>PyTorch <br>(>=1.8.0) </td>
<td>
Install PyTorch>=1.8.0 by following the <a href="https://pytorch.org/">official instructions</a>. Make sure the CUDA version required by PyTorch matches the CUDA version installed on your host (see the example after this table).
</td>
</tr>
</tbody>
</table>
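As a concrete illustration, a conda-based setup targeting CUDA 11.1 could look like the sketch below; the environment name and package versions are only examples, so pick the combination that matches your CUDA installation:
```powershell
# Create and activate a fresh environment (name and python version are examples)
conda create -n mmdeploy python=3.8 -y
conda activate mmdeploy
# Install PyTorch >= 1.8.0 built against CUDA 11.1
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c conda-forge
```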
#### Install Dependencies for SDK
<table class="docutils">
<thead>
<tr>
<th>NAME </th>
<th>INSTALLATION </th>
</tr>
</thead>
<tbody>
<tr>
<td>OpenCV </td>
<td>
1. Find and download OpenCV 3+ for Windows from <a href="https://github.com/opencv/opencv/releases">here</a>.<br>
2. You can install the prebuilt package to a target directory, or build OpenCV from source. <br>
3. Find where <code>OpenCVConfig.cmake</code> is located in the installation directory and add that directory to the environment variable <code>PATH</code>.
</td>
</tr>
</tbody>
</table>
#### Install Inference Engines for MMDeploy
<table class="docutils">
<thead>
<tr>
<th>NAME </th>
<th>INSTALLATION </th>
</tr>
</thead>
<tbody>
<tr>
<td>ONNXRuntime </td>
<td>
Download the Windows prebuilt binary package from <a href="https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1">here</a>. Extract it and set the environment variable <code>ONNXRUNTIME_DIR</code> to the extracted directory (see the example after this table).
</td>
</tr>
<tr>
<td>TensorRT </td>
<td>
1. Log in to <a href="https://www.nvidia.com/">NVIDIA</a> and download the TensorRT zip file that matches the CPU architecture and CUDA version you are using from <a href="https://developer.nvidia.com/nvidia-tensorrt-download">here</a>. Follow the <a href="https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar">guide</a> to install TensorRT. <br>
2. For example, to install TensorRT 8.2 GA Update 2 for Windows x86_64 and CUDA 11.x, click <a href="https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.2.3.0/zip/TensorRT-8.2.3.0.Windows10.x86_64.cuda-11.4.cudnn8.2.zip">here</a> to download the CUDA 11.x TensorRT 8.2.3.0 package, extract it, and set the environment variable <code>TENSORRT_DIR</code> to the extracted directory (see the example after this table).
</td>
</tr>
<tr>
<td>cuDNN </td>
<td>
1. Download the cuDNN package that matches the CPU architecture, CUDA version and TensorRT version you are using from the <a href="https://developer.nvidia.com/rdp/cudnn-archive">cuDNN Archive</a>. <br>
In the TensorRT installation example above, TensorRT 8.2.3.0 requires cuDNN 8.2. Thus, you can download <a href="https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.1.32/11.3_06072021/cudnn-11.3-windows-x64-v8.2.1.32.zip">CUDA 11.x cuDNN 8.2</a>.<br>
2. Extract the zip file and set the environment variable <code>CUDNN_DIR</code> to the extracted directory (see the example after this table).
</td>
</tr>
</tbody>
</table>
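The `ONNXRUNTIME_DIR`, `TENSORRT_DIR` and `CUDNN_DIR` variables referenced above (and in the build commands later) simply point to the directories you extracted the packages to. A minimal sketch, assuming everything was unpacked under `C:\Deps` (the folder names are placeholders; adjust them to your actual layout):
```powershell
# All paths below are placeholders; point them at your own extraction directories
$env:ONNXRUNTIME_DIR = "C:\Deps\onnxruntime-win-x64-1.8.1"
$env:TENSORRT_DIR    = "C:\Deps\TensorRT-8.2.3.0"
$env:CUDNN_DIR       = "C:\Deps\cudnn-11.x-windows-x64-v8.2.1.32"
# Optionally make the runtime DLLs discoverable in this session
$env:PATH = "$env:ONNXRUNTIME_DIR\lib;$env:TENSORRT_DIR\lib;$env:CUDNN_DIR\bin;" + $env:PATH
```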
### Build MMDeploy
#### Build Options Spec
<table class="docutils">
<thead>
<tr>
<th>NAME</th>
<th>VALUE</th>
<th>DEFAULT</th>
<th>REMARK</th>
</tr>
</thead>
<tbody>
<tr>
<td>MMDEPLOY_BUILD_SDK_PYTHON_API</td>
<td>{ON, OFF}</td>
<td>OFF</td>
<td>Switch to build MMDeploy SDK python package</td>
</tr>
<tr>
<td>MMDEPLOY_BUILD_TEST</td>
<td>{ON, OFF}</td>
<td>OFF</td>
<td>Switch to build MMDeploy SDK unittest cases</td>
</tr>
<tr>
<td>MMDEPLOY_TARGET_DEVICES</td>
<td>{"cpu", "cuda"}</td>
<td>cpu</td>
<td>Enable target device. You can enable more than one by passing a semicolon separated list of device names to the <code>MMDEPLOY_TARGET_DEVICES</code> variable, e.g. <code>-DMMDEPLOY_TARGET_DEVICES="cpu;cuda"</code></td>
</tr>
<tr>
<td>MMDEPLOY_TARGET_BACKENDS</td>
<td>{"trt", "ort", "pplnn", "ncnn", "openvino"}</td>
<td>N/A</td>
<td>Enable the target inference engines. <b>By default, no target inference engine is set, since it highly depends on the use case.</b> When more than one engine is specified, they have to be set as a semicolon separated list of inference backend names, e.g. <pre><code>-DMMDEPLOY_TARGET_BACKENDS="trt;ort;pplnn;ncnn;openvino"</code></pre>
After specifying the inference engines, their package paths have to be passed to cmake as follows: <br>
1. <b>trt</b>: TensorRT. <code>TENSORRT_DIR</code> and <code>CUDNN_DIR</code> are needed.
<pre><code>
-DTENSORRT_DIR=$env:TENSORRT_DIR
-DCUDNN_DIR=$env:CUDNN_DIR
</code></pre>
2. <b>ort</b>: ONNXRuntime. <code>ONNXRUNTIME_DIR</code> is needed.
</td>
</tr>
<tr>
<td>MMDEPLOY_CODEBASES</td>
<td>{"mmcls", "mmdet", "mmseg", "mmedit", "mmocr", "all"}</td>
<td>N/A</td>
<td>Enable the codebase's postprocess modules. You can provide a semicolon separated list of codebase names to enable them, or pass <code>all</code> to enable them all, i.e., <code>-DMMDEPLOY_CODEBASES=all</code></td>
</tr>
<tr>
<td>BUILD_SHARED_LIBS</td>
<td>{ON, OFF}</td>
<td>ON</td>
<td>Switch to build shared library or static library of MMDeploy SDK</td>
</tr>
</tbody>
</table>
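To show how these options fit together, here is an illustrative configure step that enables the SDK, the unit tests and the ONNXRuntime backend on CPU; it mirrors the recipes in the Build SDK section below, with `MMDEPLOY_BUILD_TEST` switched on:
```powershell
cd $env:MMDEPLOY_DIR
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
-DMMDEPLOY_BUILD_SDK=ON `
-DMMDEPLOY_BUILD_TEST=ON `
-DMMDEPLOY_TARGET_DEVICES="cpu" `
-DMMDEPLOY_TARGET_BACKENDS="ort" `
-DMMDEPLOY_CODEBASES="all" `
-DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
```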
#### Build Model Converter
##### Build Custom Ops
If one of the inference engines among ONNXRuntime, TensorRT and ncnn is selected, you have to build the corresponding custom ops.
- **ONNXRuntime** Custom Ops
```powershell
cd $env:MMDEPLOY_DIR
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="ort" -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
cmake --build . --config Release -- /m
```
- **TensorRT** Custom Ops
```powershell
cd $env:MMDEPLOY_DIR
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="trt" -DTENSORRT_DIR="$env:TENSORRT_DIR" -DCUDNN_DIR="$env:CUDNN_DIR"
cmake --build . --config Release -- /m
```
- **ncnn** Custom Ops
TODO
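After building the ONNXRuntime or TensorRT custom ops, you can optionally confirm that the op libraries were actually produced. The snippet below just lists the DLLs under the build tree; the exact file names vary with the backend and MMDeploy version:
```powershell
# List the DLLs produced by the custom-op build; names differ per backend and version
Get-ChildItem -Path "$env:MMDEPLOY_DIR\build" -Recurse -Filter *.dll | Select-Object FullName
```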
##### Install Model Converter
```powershell
cd $env:MMDEPLOY_DIR
pip install -e .
```
**Note**
- Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements.
To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -e .[optional]`).
Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.
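As a quick, optional sanity check of the installation, you can import the package from the same environment:
```powershell
# Print the installed mmdeploy version to confirm the converter is importable
python -c "import mmdeploy; print(mmdeploy.__version__)"
```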
#### Build SDK
MMDeploy provides two recipes, shown below, for building the SDK with ONNXRuntime and TensorRT as the inference engine respectively. You can enable other engines in the same way.
- cpu + ONNXRuntime
```PowerShell
cd $env:MMDEPLOY_DIR
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
-DMMDEPLOY_BUILD_SDK=ON `
-DMMDEPLOY_TARGET_DEVICES="cpu" `
-DMMDEPLOY_TARGET_BACKENDS="ort" `
-DMMDEPLOY_CODEBASES="all" `
-DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
cmake --build . --config Release -- /m
cmake --install . --config Release
```
- cuda + TensorRT
```PowerShell
cd $env:MMDEPLOY_DIR
mkdir build -ErrorAction SilentlyContinue
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
-DMMDEPLOY_BUILD_SDK=ON `
-DMMDEPLOY_TARGET_DEVICES="cuda" `
-DMMDEPLOY_TARGET_BACKENDS="trt" `
-DMMDEPLOY_CODEBASES="all" `
-DTENSORRT_DIR="$env:TENSORRT_DIR" `
-DCUDNN_DIR="$env:CUDNN_DIR"
cmake --build . --config Release -- /m
cmake --install . --config Release
```
### Note
1. Release / Debug libraries cannot be mixed. If MMDeploy is built in Release mode, all of its dependent third-party libraries have to be built in Release mode too, and vice versa.