# Build for Windows

- [Build for Windows](#build-for-windows)
  - [Build From Source](#build-from-source)
    - [Install Toolchains](#install-toolchains)
    - [Install Dependencies](#install-dependencies)
      - [Install Dependencies for Model Converter](#install-dependencies-for-model-converter)
      - [Install Dependencies for SDK](#install-dependencies-for-sdk)
      - [Install Inference Engines for MMDeploy](#install-inference-engines-for-mmdeploy)
    - [Build MMDeploy](#build-mmdeploy)
      - [Build Options Spec](#build-options-spec)
      - [Build Model Converter](#build-model-converter)
        - [Build Custom Ops](#build-custom-ops)
        - [Install Model Converter](#install-model-converter)
      - [Build SDK and Demos](#build-sdk-and-demos)
    - [Note](#note)

______________________________________________________________________

## Build From Source

All the commands listed in the following chapters are verified on **Windows 10**.

### Install Toolchains

1. Download and install [Visual Studio 2019](https://visualstudio.microsoft.com)
2. Add the path of `cmake` to the environment variable `PATH`, i.e., "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\Common7\\IDE\\CommonExtensions\\Microsoft\\CMake\\CMake\\bin"
3. Install the CUDA toolkit if an NVIDIA GPU is available. You can refer to the official [guide](https://developer.nvidia.com/cuda-downloads).

### Install Dependencies

#### Install Dependencies for Model Converter

| NAME | INSTALLATION |
| :--- | :----------- |
| conda | Please install conda according to the official guide.<br>After installation, open Anaconda Powershell Prompt under the Start Menu as the administrator, because:<br>1. All the commands listed in the following text are verified in Anaconda Powershell.<br>2. As an administrator, you can install the third-party libraries to the system path so as to simplify the MMDeploy build commands.<br>Note: if you are familiar with how cmake works, you can also use Anaconda Powershell Prompt as an ordinary user. |
| PyTorch (>=1.8.0) | Install PyTorch>=1.8.0 by following the official instructions. Make sure the CUDA version required by PyTorch matches the CUDA version on your host (see the sketch after this table). |
| mmcv-full | Install mmcv-full as shown in the sketch after this table. Refer to the guide for details. |
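The exact install commands referenced by this table did not survive extraction. The following PowerShell sketch shows one possible setup, assuming CUDA 11.1, PyTorch 1.8.0 and mmcv-full 1.4.0; the environment name, versions and wheel index URLs are examples and should be adjusted to your host:

```powershell
# Create and activate a conda environment (name and python version are examples)
conda create -n mmdeploy python=3.8 -y
conda activate mmdeploy

# Install PyTorch >= 1.8.0 built against the CUDA version on your host (CUDA 11.1 here)
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html

# Install mmcv-full from the wheel index matching the CUDA and PyTorch versions above
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8.0/index.html
```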
#### Install Dependencies for SDK

| NAME | INSTALLATION |
| :--- | :----------- |
| OpenCV (>=3.0) | 1. Find and download OpenCV 3+ for Windows from here.<br>2. You can download the prebuilt package and install it to the target directory, or build OpenCV from its source.<br>3. Find where OpenCVConfig.cmake is located in the installation directory, and add its path to the environment variable PATH (see the sketch after this table). |
| pplcv | A high-performance image processing library of openPPL. It is optional and only needed when the cuda platform is required (see the build sketch after this table). |
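A minimal PowerShell sketch of the steps above, assuming OpenCV was installed under `C:\opencv` and that ppl.cv is built from source for the cuda platform; the paths and the Visual Studio generator are examples, not fixed values:

```powershell
# Make OpenCV discoverable: add the directory containing OpenCVConfig.cmake to PATH
# (the path below is an example; use your own installation directory)
$env:PATH = "C:\opencv\build\x64\vc15\lib;" + $env:PATH

# Build and install ppl.cv, which is only required for the cuda platform
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
$env:PPLCV_DIR = "$pwd"
mkdir pplcv-build
cd pplcv-build
cmake .. -G "Visual Studio 16 2019" -A x64 -DHPCC_USE_CUDA=ON
cmake --build . --config Release
cmake --install . --config Release
cd ../..
```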
#### Install Inference Engines for MMDeploy

| NAME | PACKAGE | INSTALLATION |
| :--- | :------ | :----------- |
| ONNXRuntime | onnxruntime (>=1.8.1) | 1. Install the python package.<br>2. Download the Windows prebuilt binary package from here. Extract it and export the environment variables (see the sketch after this table). |
| TensorRT | TensorRT | 1. Login to NVIDIA and download the TensorRT tar file that matches the CPU architecture and CUDA version you are using from here. Follow the guide to install TensorRT.<br>2. As an example, to install TensorRT 8.2 GA Update 2 for Windows x86_64 and CUDA 11.x, click here to download CUDA 11.x TensorRT 8.2.3.0, then install it and the other dependencies (see the sketch after this table). |
| TensorRT | cuDNN | 1. Download the cuDNN build that matches the CPU architecture, CUDA version and TensorRT version you are using from the cuDNN Archive. The TensorRT installation example above requires cuDNN 8.2, so download CUDA 11.x cuDNN 8.2.<br>2. Extract the zip file and set the environment variables (see the sketch after this table). |
| PPL.NN | ppl.nn | TODO |
| OpenVINO | openvino | TODO |
| ncnn | ncnn | TODO |
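The concrete commands for this table were also stripped. The PowerShell sketch below illustrates the kind of setup each row describes, following the ONNX Runtime 1.8.1 and TensorRT 8.2.3.0 / cuDNN 8.2 examples above; all archive names, wheel names and install locations are assumptions and must be adjusted to the files you actually downloaded:

```powershell
# ONNX Runtime: python package plus the prebuilt native package
pip install onnxruntime==1.8.1
Invoke-WebRequest -Uri https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-win-x64-1.8.1.zip -OutFile onnxruntime-win-x64-1.8.1.zip
Expand-Archive onnxruntime-win-x64-1.8.1.zip .
$env:ONNXRUNTIME_DIR = "$pwd\onnxruntime-win-x64-1.8.1"
$env:PATH = "$env:ONNXRUNTIME_DIR\lib;" + $env:PATH

# TensorRT: extract the archive downloaded from NVIDIA, install its python wheel,
# and point TENSORRT_DIR at it (archive and wheel names depend on your download)
Expand-Archive TensorRT-8.2.3.0.Windows10.x86_64.cuda-11.4.cudnn8.2.zip .
$env:TENSORRT_DIR = "$pwd\TensorRT-8.2.3.0"
$env:PATH = "$env:TENSORRT_DIR\lib;" + $env:PATH
pip install "$env:TENSORRT_DIR\python\tensorrt-8.2.3.0-cp38-none-win_amd64.whl"  # pick the wheel matching your python version
pip install pycuda

# cuDNN: extract the zip and set CUDNN_DIR (the archive unpacks into a folder named "cuda")
Expand-Archive cudnn-11.3-windows-x64-v8.2.1.32.zip .
$env:CUDNN_DIR = "$pwd\cuda"
$env:PATH = "$env:CUDNN_DIR\bin;" + $env:PATH
```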
### Build MMDeploy

#### Build Options Spec

| NAME | VALUE | DEFAULT | REMARK |
| :--- | :---- | :------ | :----- |
| MMDEPLOY_BUILD_SDK | {ON, OFF} | OFF | Switch to build MMDeploy SDK |
| MMDEPLOY_BUILD_SDK_PYTHON_API | {ON, OFF} | OFF | Switch to build MMDeploy SDK python package |
| MMDEPLOY_BUILD_TEST | {ON, OFF} | OFF | Switch to build MMDeploy SDK unittest cases |
| MMDEPLOY_TARGET_DEVICES | {"cpu", "cuda"} | cpu | Enable target device. You can enable more than one by passing a semicolon-separated list of device names to MMDEPLOY_TARGET_DEVICES, e.g. -DMMDEPLOY_TARGET_DEVICES="cpu;cuda" |
| MMDEPLOY_TARGET_BACKENDS | {"trt", "ort", "pplnn", "ncnn", "openvino"} | N/A | Enable inference engines. By default, no target inference engine is set, since it highly depends on the use case. When more than one engine is specified, pass a semicolon-separated list of backend names, e.g. -DMMDEPLOY_TARGET_BACKENDS="trt;ort". After specifying the inference engines, their package paths have to be passed to cmake as follows:<br>1. trt: TensorRT. TENSORRT_DIR and CUDNN_DIR are needed.<br>2. ort: ONNXRuntime. ONNXRUNTIME_DIR is needed.<br>3. pplnn: PPL.NN. pplnn_DIR is needed. MMDeploy hasn't verified it yet.<br>4. ncnn: ncnn. ncnn_DIR is needed. MMDeploy hasn't verified it yet.<br>5. openvino: OpenVINO. InferenceEngine_DIR is needed. MMDeploy hasn't verified it yet. |
| MMDEPLOY_CODEBASES | {"mmcls", "mmdet", "mmseg", "mmedit", "mmocr", "all"} | all | Enable codebases' postprocess modules. You can provide a semicolon-separated list of codebase names to enable them, or pass all to enable them all, i.e., -DMMDEPLOY_CODEBASES=all |
| MMDEPLOY_SHARED_LIBS | {ON, OFF} | ON | Switch between building shared libraries or static libraries of the MMDeploy SDK |
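As an illustration of how these options are passed to cmake, here is a minimal PowerShell sketch of configuring and building the model converter's custom ops for the TensorRT backend, assuming the repository is checked out locally and TENSORRT_DIR / CUDNN_DIR were exported as above; the directory layout, generator and option values are examples, not the project's verified build recipe:

```powershell
# Configure and build the custom ops for the TensorRT backend (example options only)
cd \path\to\mmdeploy
mkdir build
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 `
    -DMMDEPLOY_TARGET_BACKENDS="trt" `
    -DTENSORRT_DIR="$env:TENSORRT_DIR" `
    -DCUDNN_DIR="$env:CUDNN_DIR"
cmake --build . --config Release -- /m
cd ..

# Install the model converter python package in editable mode
pip install -e .
```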