From 2376b03ede572002f765f8eef23d8eea2fc0a694 Mon Sep 17 00:00:00 2001
From: timminator <150205162+timminator@users.noreply.github.com>
Date: Sun, 9 Feb 2025 12:34:09 +0100
Subject: [PATCH] Add English docs for CMake compilation on Windows (#14641)
---
deploy/cpp_infer/docs/windows_vs2019_build.md | 2 +
deploy/cpp_infer/docs/windows_vs2022_build.md | 143 ++++++++++++++++++
deploy/cpp_infer/readme.md | 2 +-
.../infer_deploy/windows_vs2019_build.en.md | 143 +++++++++---------
4 files changed, 218 insertions(+), 72 deletions(-)
create mode 100644 deploy/cpp_infer/docs/windows_vs2022_build.md
diff --git a/deploy/cpp_infer/docs/windows_vs2019_build.md b/deploy/cpp_infer/docs/windows_vs2019_build.md
index 94205d9a0..5830c06d3 100644
--- a/deploy/cpp_infer/docs/windows_vs2019_build.md
+++ b/deploy/cpp_infer/docs/windows_vs2019_build.md
@@ -1,3 +1,5 @@
+[English](windows_vs2022_build.md) | 简体中文
+
- [Visual Studio 2019 Community CMake 编译指南](#visual-studio-2019-community-cmake-编译指南)
- [1. 环境准备](#1-环境准备)
- [1.1 安装必须环境](#11-安装必须环境)
diff --git a/deploy/cpp_infer/docs/windows_vs2022_build.md b/deploy/cpp_infer/docs/windows_vs2022_build.md
new file mode 100644
index 000000000..daaadb751
--- /dev/null
+++ b/deploy/cpp_infer/docs/windows_vs2022_build.md
@@ -0,0 +1,143 @@
+English | [简体中文](windows_vs2019_build.md)
+
+# Visual Studio 2022 Community CMake Compilation Guide
+
+PaddleOCR has been tested on Windows using `Visual Studio 2022 Community`. Microsoft started supporting direct `CMake` project management from `Visual Studio 2017`, but it wasn't fully stable and reliable until `2019`. If you want to use CMake for project management and compilation, we recommend using `Visual Studio 2022`.
+
+**All examples below assume the working directory is `D:\projects\cpp`.**
+
+## 1. Environment Preparation
+
+### 1.1 Install Required Dependencies
+
+- Visual Studio 2019 or newer
+- CUDA 10.2, cuDNN 7+ (only required for the GPU version of the prediction library). The NVIDIA CUDA Toolkit must be installed and the matching NVIDIA cuDNN library downloaded.
+- CMake 3.22+
+
+Ensure that the above dependencies are installed before proceeding. This tutorial uses the Community Edition of `VS2022`.
+
+### 1.2 Download PaddlePaddle C++ Prediction Library and OpenCV
+
+#### 1.2.1 Download PaddlePaddle C++ Prediction Library
+
+PaddlePaddle C++ prediction libraries offer different precompiled versions for various `CPU` and `CUDA` configurations. Download the appropriate version from: [C++ Prediction Library Download List](https://www.paddlepaddle.org.cn/inference/master/guides/install/download_lib.html#windows)
+
+After extraction, the `D:\projects\paddle_inference` directory should contain:
+
+```
+paddle_inference
+├── paddle # Core Paddle library and header files
+|
+├── third_party # Third-party dependencies and headers
+|
+└── version.txt # Version and compilation information
+```
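+
+If you prefer to script this step, the download and extraction can be done from PowerShell. This is a minimal sketch; the URL is a placeholder, so substitute the package that matches your CPU/CUDA configuration from the download list above:
+
+```shell
+# Placeholder URL - pick the correct package from the download list above
+Invoke-WebRequest -Uri "https://example.com/paddle_inference.zip" -OutFile "D:\projects\paddle_inference.zip"
+# Extract so that D:\projects\paddle_inference is created
+Expand-Archive -Path "D:\projects\paddle_inference.zip" -DestinationPath "D:\projects"
+```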
+
+#### 1.2.2 Install and Configure OpenCV
+
+1. Download OpenCV for Windows from the [official release page](https://github.com/opencv/opencv/releases).
+2. Run the downloaded executable and extract OpenCV to a specified directory, e.g., `D:\projects\cpp\opencv`.
+
+#### 1.2.3 Download PaddleOCR Code
+
+```bash
+git clone https://github.com/PaddlePaddle/PaddleOCR.git
+cd PaddleOCR
+git checkout develop
+```
+
+## 2. Running the Project
+
+### Step 1: Create a Visual Studio Project
+
+Once CMake is installed, open the `cmake-gui` application. Specify the source code directory in the first input box and the build output directory in the second input box.
+
+
+
+### Step 2: Run CMake Configuration
+
+Click the `Configure` button at the bottom of the interface. The first time you run it, a prompt will appear asking for the Visual Studio configuration. Select your `Visual Studio` version and set the target platform to `x64`. Click `Finish` to start the configuration process.
+
+
+
+The first run will result in errors, which is expected. You now need to configure OpenCV and the prediction library.
+
+- **For CPU version**, configure the following variables:
+
+ - `OPENCV_DIR`: Path to the OpenCV `lib` folder
+ - `OpenCV_DIR`: Same as `OPENCV_DIR`
+ - `PADDLE_LIB`: Path to the `paddle_inference` folder
+
+- **For GPU version**, configure additional variables:
+
+  - `CUDA_LIB`: Path to the CUDA `lib` directory, e.g., `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64`
+  - `CUDNN_LIB`: Path to the extracted cuDNN library, e.g., `D:\CuDNN-8.9.7.29`
+  - `TENSORRT_DIR`: Path to the extracted TensorRT directory, e.g., `D:\TensorRT-8.0.1.6`
+ - `WITH_GPU`: Check this option
+ - `WITH_TENSORRT`: Check this option
+
+Example configuration:
+
+
+
+Once configured, click `Configure` again.
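+
+If you prefer the command line to `cmake-gui`, the same configuration can be produced in one `cmake` invocation. This is a sketch from `cmd`, assuming the directory layout used throughout this guide; adjust the paths and the generator to your setup:
+
+```shell
+cmake -S D:\projects\cpp\PaddleOCR\deploy\cpp_infer ^
+      -B D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build ^
+      -G "Visual Studio 17 2022" -A x64 ^
+      -DOPENCV_DIR=D:\projects\cpp\opencv\build\x64\vc15\lib ^
+      -DOpenCV_DIR=D:\projects\cpp\opencv\build\x64\vc15\lib ^
+      -DPADDLE_LIB=D:\projects\paddle_inference
+# For the GPU version, additionally pass:
+#   -DWITH_GPU=ON -DWITH_TENSORRT=ON
+#   -DCUDA_LIB="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64"
+#   -DCUDNN_LIB=D:\CuDNN-8.9.7.29 -DTENSORRT_DIR=D:\TensorRT-8.0.1.6
+```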
+
+**Note:**
+
+1. If using `openblas`, uncheck `WITH_MKL`.
+2. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed`, update `deploy/cpp_infer/external-cmake/auto-log.cmake` to use `https://gitee.com/Double_V/AutoLog`, as sketched below.
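+
+A sketch of that one-line change, applied from PowerShell in the PaddleOCR checkout:
+
+```shell
+(Get-Content deploy\cpp_infer\external-cmake\auto-log.cmake) -replace 'https://github.com/LDOUBLEV/AutoLog.git', 'https://gitee.com/Double_V/AutoLog' | Set-Content deploy\cpp_infer\external-cmake\auto-log.cmake
+```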
+
+### Step 3: Generate Visual Studio Project
+
+Click `Generate` to create the `.sln` file for the Visual Studio project.
+
+
+Click `Open Project` to launch the project in Visual Studio. The interface should look like this:
+
+
+Before building the solution, perform the following steps:
+
+1. Change `Debug` to `Release` mode.
+2. Download [dirent.h](https://paddleocr.bj.bcebos.com/deploy/cpp_infer/cpp_files/dirent.h) and copy it to the Visual Studio include directory, e.g., `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\<version>\include`.
+
+Click `Build -> Build Solution`. Once completed, the `ppocr.exe` file should appear in the `build/Release/` folder.
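+
+Equivalently, the solution can be built from the command line (a sketch, assuming the build directory configured in Step 1):
+
+```shell
+cmake --build D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build --config Release
+```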
+
+Before running, copy the following files to `build/Release/` (a scripted copy is sketched after this list):
+
+1. `paddle_inference/paddle/lib/paddle_inference.dll`
+2. `paddle_inference/paddle/lib/common.dll`
+3. `paddle_inference/third_party/install/mklml/lib/mklml.dll`
+4. `paddle_inference/third_party/install/mklml/lib/libiomp5md.dll`
+5. `paddle_inference/third_party/install/onednn/lib/mkldnn.dll`
+6. `opencv/build/x64/vc15/bin/opencv_world455.dll`
+7. If using the `openblas` version, also copy `paddle_inference/third_party/install/openblas/lib/openblas.dll`.
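+
+A scripted version of the copies above, as a PowerShell sketch that assumes the paths used in this guide:
+
+```shell
+$release = "D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build\Release"
+Copy-Item D:\projects\paddle_inference\paddle\lib\paddle_inference.dll $release
+Copy-Item D:\projects\paddle_inference\paddle\lib\common.dll $release
+Copy-Item D:\projects\paddle_inference\third_party\install\mklml\lib\mklml.dll $release
+Copy-Item D:\projects\paddle_inference\third_party\install\mklml\lib\libiomp5md.dll $release
+Copy-Item D:\projects\paddle_inference\third_party\install\onednn\lib\mkldnn.dll $release
+Copy-Item D:\projects\cpp\opencv\build\x64\vc15\bin\opencv_world455.dll $release
+# openblas builds only:
+# Copy-Item D:\projects\paddle_inference\third_party\install\openblas\lib\openblas.dll $release
+```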
+
+### Step 4: Run the Prediction
+
+The compiled executable is located in the `build/Release/` directory. Open `cmd` and navigate to `D:\projects\cpp\PaddleOCR\deploy\cpp_infer\`:
+
+```shell
+cd /d D:\projects\cpp\PaddleOCR\deploy\cpp_infer
+```
+
+Run the prediction using `ppocr.exe`. For more usage details, refer to the [documentation](../readme.md).
+
+```shell
+# Switch terminal encoding to UTF-8
+CHCP 65001
+
+# If using PowerShell, run this command before execution to fix character encoding issues:
+$OutputEncoding = [console]::InputEncoding = [console]::OutputEncoding = New-Object System.Text.UTF8Encoding
+
+# Execute prediction
+.\build\Release\ppocr.exe system --det_model_dir=D:\projects\cpp\ch_PP-OCRv2_det_slim_quant_infer --rec_model_dir=D:\projects\cpp\ch_PP-OCRv2_rec_slim_quant_infer --image_dir=D:\projects\cpp\PaddleOCR\doc\imgs\11.jpg
+```
+
+
+
+## Sample Result
+
+
+## FAQ
+
+- **Issue:** Application fails to start with error `(0xc0000142)` and `cmd` output shows `You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found.`
+- **Solution:** Copy all `.dll` files from the `TensorRT` directory's `lib` folder into the `Release` directory and try running it again, as in the sketch below.
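+
+  For example, assuming the TensorRT path used above:
+
+  ```shell
+  copy "D:\TensorRT-8.0.1.6\lib\*.dll" "D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build\Release"
+  ```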
diff --git a/deploy/cpp_infer/readme.md b/deploy/cpp_infer/readme.md
index d4be0f945..64cc39da9 100644
--- a/deploy/cpp_infer/readme.md
+++ b/deploy/cpp_infer/readme.md
@@ -14,7 +14,7 @@ English | [简体中文](readme_ch.md)
This chapter introduces the C++ deployment steps of the PaddleOCR model. C++ is better than Python in terms of performance. Therefore, in CPU and GPU deployment scenarios, C++ deployment is mostly used.
-This section will introduce how to configure the C++ environment and deploy PaddleOCR in Linux (CPU\GPU) environment. For Windows deployment please refer to [Windows](./docs/windows_vs2019_build.md) compilation guidelines.
+This section will introduce how to configure the C++ environment and deploy PaddleOCR in a Linux (CPU/GPU) environment. For Windows deployment, please refer to the [Windows](./docs/windows_vs2022_build.md) compilation guide.
diff --git a/docs/ppocr/infer_deploy/windows_vs2019_build.en.md b/docs/ppocr/infer_deploy/windows_vs2019_build.en.md
index 0f895ada3..6bb1c3841 100644
--- a/docs/ppocr/infer_deploy/windows_vs2019_build.en.md
+++ b/docs/ppocr/infer_deploy/windows_vs2019_build.en.md
@@ -2,143 +2,144 @@
comments: true
---
-# Visual Studio 2019 Community CMake Compilation Guide
+# Visual Studio 2022 Community CMake Compilation Guide
-PaddleOCR is tested on Windows based on `Visual Studio 2019 Community`. Microsoft has supported direct management of `CMake` cross-platform compilation projects since `Visual Studio 2017`, but it was not until `2019` that stable and complete support was provided, so if you want to use CMake to manage project compilation and build, we recommend that you use the `Visual Studio 2019` environment to build.
+PaddleOCR has been tested on Windows using `Visual Studio 2022 Community`. Microsoft started supporting direct `CMake` project management from `Visual Studio 2017`, but it wasn't fully stable and reliable until `2019`. If you want to use CMake for project management and compilation, we recommend using `Visual Studio 2022`.
-**All the examples below are demonstrated with the working directory as `D:\projects\cpp`**.
+**All examples below assume the working directory is `D:\projects\cpp`.**
## 1. Environment Preparation
-### 1.1 Install the required environment
+### 1.1 Install Required Dependencies
-- Visual Studio 2019
-- CUDA 10.2, cudnn 7+ (only required when using the GPU version of the prediction library)
+- Visual Studio 2019 or newer
+- CUDA 10.2, cuDNN 7+ (only required for the GPU version of the prediction library). The NVIDIA CUDA Toolkit must be installed and the matching NVIDIA cuDNN library downloaded.
- CMake 3.22+
-Please make sure the system has the above basic software installed. We use the community version of `VS2019`.
+Ensure that the above dependencies are installed before proceeding. This tutorial uses the Community Edition of `VS2022`.
-### 1.2 Download PaddlePaddle C++ prediction library and Opencv
+### 1.2 Download PaddlePaddle C++ Prediction Library and OpenCV
-#### 1.2.1 Download PaddlePaddle C++ prediction library
+#### 1.2.1 Download PaddlePaddle C++ Prediction Library
-PaddlePaddle C++ prediction library provides different precompiled versions for different `CPU` and `CUDA` versions. Please download according to the actual situation: [C++ prediction library download list](https://www.paddlepaddle.org.cn/inference/master/guides/install/download_lib.html#windows)
+PaddlePaddle C++ prediction libraries offer different precompiled versions for various `CPU` and `CUDA` configurations. Download the appropriate version from: [C++ Prediction Library Download List](https://www.paddlepaddle.org.cn/inference/master/guides/install/download_lib.html#windows)
-After decompression, the `D:\projects\paddle_inference` directory contains the following contents:
+After extraction, the `D:\projects\paddle_inference` directory should contain:
```
paddle_inference
-├── paddle # paddle core library and header files
+├── paddle # Core Paddle library and header files
|
-├── third_party # third-party dependent libraries and header files
+├── third_party # Third-party dependencies and headers
|
-└── version.txt # version and compilation information
+└── version.txt # Version and compilation information
```
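+
+If you prefer to script this step, the download and extraction can be done from PowerShell. This is a minimal sketch; the URL is a placeholder, so substitute the package that matches your CPU/CUDA configuration from the download list above:
+
+```shell
+# Placeholder URL - pick the correct package from the download list above
+Invoke-WebRequest -Uri "https://example.com/paddle_inference.zip" -OutFile "D:\projects\paddle_inference.zip"
+# Extract so that D:\projects\paddle_inference is created
+Expand-Archive -Path "D:\projects\paddle_inference.zip" -DestinationPath "D:\projects"
+```
+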
-#### 1.2.2 Install and configure OpenCV
+#### 1.2.2 Install and Configure OpenCV
-1. Download Opencv for Windows platform from the OpenCV official website, [Download address](https://github.com/opencv/opencv/releases)
-2. Run the downloaded executable file and unzip OpenCV to the specified directory, such as `D:\projects\cpp\opencv`
+1. Download OpenCV for Windows from the [official release page](https://github.com/opencv/opencv/releases).
+2. Run the downloaded executable and extract OpenCV to a specified directory, e.g., `D:\projects\cpp\opencv`.
-#### 1.2.3 Download PaddleOCR code
+#### 1.2.3 Download PaddleOCR Code
-```bash linenums="1"
-git clone -b dygraph https://github.com/PaddlePaddle/PaddleOCR
+```bash
+git clone https://github.com/PaddlePaddle/PaddleOCR.git
+cd PaddleOCR
+git checkout develop
```
-## 2. Start running
+## 2. Running the Project
-### Step1: Build Visual Studio project
+### Step 1: Create a Visual Studio Project
-After cmake is installed, there will be a cmake-gui program in the system. Open cmake-gui, fill in the source code path in the first input box, and fill in the compilation output path in the second input box
+Once CMake is installed, open the `cmake-gui` application. Specify the source code directory in the first input box and the build output directory in the second input box.
-
+
-### Step2: Execute cmake configuration
+### Step 2: Run CMake Configuration
-Click the `Configure` button at the bottom of the interface. The first click will pop up a prompt box for Visual Studio configuration, as shown below. Select your Visual Studio version is fine, and the target platform is x64. Then click the `finish` button to start the automatic configuration.
+Click the `Configure` button at the bottom of the interface. The first time you run it, a prompt will appear asking for the Visual Studio configuration. Select your `Visual Studio` version and set the target platform to `x64`. Click `Finish` to start the configuration process.

-The first execution will report an error, which is normal. Next, configure Opencv and the prediction library
+The first run will result in errors, which is expected. You now need to configure OpenCV and the prediction library.
-- For cpu version, only the three parameters OPENCV_DIR, OpenCV_DIR, and PADDLE_LIB need to be considered
+- **For CPU version**, configure the following variables:
-- OPENCV_DIR: Fill in the location of the opencv lib folder
+ - `OPENCV_DIR`: Path to the OpenCV `lib` folder
+ - `OpenCV_DIR`: Same as `OPENCV_DIR`
+ - `PADDLE_LIB`: Path to the `paddle_inference` folder
-- OpenCV_DIR: Fill in the location of the opencv lib folder
+- **For GPU version**, configure additional variables:
-- PADDLE_LIB: The location of the paddle_inference folder
+  - `CUDA_LIB`: Path to the CUDA `lib` directory, e.g., `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64`
+  - `CUDNN_LIB`: Path to the extracted cuDNN library, e.g., `D:\CuDNN-8.9.7.29`
+  - `TENSORRT_DIR`: Path to the extracted TensorRT directory, e.g., `D:\TensorRT-8.0.1.6`
+ - `WITH_GPU`: Check this option
+ - `WITH_TENSORRT`: Check this option
-- For GPU version, on the basis of the cpu version, the following variables need to be filled in
-CUDA_LIB, CUDNN_LIB, TENSORRT_DIR, WITH_GPU, WITH_TENSORRT
-
-- CUDA_LIB: CUDA address, such as `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64`
-
-- CUDNN_LIB: The same as CUDA_LIB
-
-- TENSORRT_DIR: The location where TRT is unzipped after downloading, such as `D:\TensorRT-8.0.1.6`
-- WITH_GPU: Check
-- WITH_TENSORRT: Check
-
-The configured screenshot is as follows
+Example configuration:

-After the configuration is completed, click the `Configure` button again.
+Once configured, click `Configure` again.
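+
+If you prefer the command line to `cmake-gui`, the same configuration can be produced in one `cmake` invocation. This is a sketch from `cmd`, assuming the directory layout used throughout this guide; adjust the paths and the generator to your setup:
+
+```shell
+cmake -S D:\projects\cpp\PaddleOCR\deploy\cpp_infer ^
+      -B D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build ^
+      -G "Visual Studio 17 2022" -A x64 ^
+      -DOPENCV_DIR=D:\projects\cpp\opencv\build\x64\vc15\lib ^
+      -DOpenCV_DIR=D:\projects\cpp\opencv\build\x64\vc15\lib ^
+      -DPADDLE_LIB=D:\projects\paddle_inference
+# For the GPU version, additionally pass:
+#   -DWITH_GPU=ON -DWITH_TENSORRT=ON
+#   -DCUDA_LIB="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\lib\x64"
+#   -DCUDNN_LIB=D:\CuDNN-8.9.7.29 -DTENSORRT_DIR=D:\TensorRT-8.0.1.6
+```
+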
**Note:**
-1. If you are using the `openblas` version, please uncheck `WITH_MKL`
-2. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.`, change the github address in `deploy/cpp_infer/external-cmake/auto-log.cmake` to address.
+1. If using `openblas`, uncheck `WITH_MKL`.
+2. If you encounter the error `unable to access 'https://github.com/LDOUBLEV/AutoLog.git/': gnutls_handshake() failed`, update `deploy/cpp_infer/external-cmake/auto-log.cmake` to use `https://gitee.com/Double_V/AutoLog`, as sketched below.
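+
+A sketch of that one-line change, applied from PowerShell in the PaddleOCR checkout:
+
+```shell
+(Get-Content deploy\cpp_infer\external-cmake\auto-log.cmake) -replace 'https://github.com/LDOUBLEV/AutoLog.git', 'https://gitee.com/Double_V/AutoLog' | Set-Content deploy\cpp_infer\external-cmake\auto-log.cmake
+```
+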
-### Step3: Generate Visual Studio Project
+### Step 3: Generate Visual Studio Project
-Click the `Generate` button to generate the sln file of the Visual Studio project.
+Click `Generate` to create the `.sln` file for the Visual Studio project.

-Click the `Open Project` button to open the project in Visual Studio. The screenshot after opening is as follows
-
+Click `Open Project` to launch the project in Visual Studio. The interface should look like this:

-Before starting to generate the solution, perform the following steps:
+Before building the solution, perform the following steps:
-1. Change `Debug` to `Release`
+1. Change `Debug` to `Release` mode.
+2. Download [dirent.h](https://paddleocr.bj.bcebos.com/deploy/cpp_infer/cpp_files/dirent.h) and copy it to the Visual Studio include directory, e.g., `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\<version>\include`.
-2. Download [dirent.h](https://paddleocr.bj.bcebos.com/deploy/cpp_infer/cpp_files/dirent.h) and copy it to the include folder of Visual Studio, such as `C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include`.
+Click `Build -> Build Solution`. Once completed, the `ppocr.exe` file should appear in the `build/Release/` folder.
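+
+Equivalently, the solution can be built from the command line (a sketch, assuming the build directory configured in Step 1):
+
+```shell
+cmake --build D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build --config Release
+```
+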
-Click `Build->Generate Solution`, and you can see the `ppocr.exe` file in the `build/Release/` folder.
-
-Before running, copy the following files to the `build/Release/` folder
+Before running, copy the following files to `build/Release/` (a scripted copy is sketched after this list):
1. `paddle_inference/paddle/lib/paddle_inference.dll`
+2. `paddle_inference/paddle/lib/common.dll`
+3. `paddle_inference/third_party/install/mklml/lib/mklml.dll`
+4. `paddle_inference/third_party/install/mklml/lib/libiomp5md.dll`
+5. `paddle_inference/third_party/install/onednn/lib/mkldnn.dll`
+6. `opencv/build/x64/vc15/bin/opencv_world455.dll`
+7. If using the `openblas` version, also copy `paddle_inference/third_party/install/openblas/lib/openblas.dll`.
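+
+A scripted version of the copies above, as a PowerShell sketch that assumes the paths used in this guide:
+
+```shell
+$release = "D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build\Release"
+Copy-Item D:\projects\paddle_inference\paddle\lib\paddle_inference.dll $release
+Copy-Item D:\projects\paddle_inference\paddle\lib\common.dll $release
+Copy-Item D:\projects\paddle_inference\third_party\install\mklml\lib\mklml.dll $release
+Copy-Item D:\projects\paddle_inference\third_party\install\mklml\lib\libiomp5md.dll $release
+Copy-Item D:\projects\paddle_inference\third_party\install\onednn\lib\mkldnn.dll $release
+Copy-Item D:\projects\cpp\opencv\build\x64\vc15\bin\opencv_world455.dll $release
+# openblas builds only:
+# Copy-Item D:\projects\paddle_inference\third_party\install\openblas\lib\openblas.dll $release
+```
+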
-2. `paddle_inference/third_party/install/onnxruntime/lib/onnxruntime.dll`
+### Step 4: Run the Prediction
-3. `paddle_inference/third_party/install/paddle2onnx/lib/paddle2onnx.dll`
-
-4. `opencv/build/x64/vc15/bin/opencv_world455.dll`
-
-5. If you use the prediction library of the openblas version, you also need to copy `paddle_inference/third_party/install/openblas/lib/openblas.dll`
-
-### Step4: Prediction
-
-The above `Visual Studio The executable file compiled by 2019 is in the directory of build/Release/. Open cmd and switch to D:\projects\cpp\PaddleOCR\deploy\cpp_infer\:
+The compiled executable is located in the `build/Release/` directory. Open `cmd` and navigate to `D:\projects\cpp\PaddleOCR\deploy\cpp_infer\`:
+```shell
cd /d D:\projects\cpp\PaddleOCR\deploy\cpp_infer
+```
-The executable file ppocr.exe is the sample prediction program. Its main usage is as follows. For more usage, please refer to the [Instructions](./cpp_infer.en.md) section of running demo.
+Run the prediction using `ppocr.exe`. For more usage details, refer to the [Instructions](./cpp_infer.en.md) for running the demo.
-```bash linenums="1"
-# Switch terminal encoding to utf8
+```shell
+# Switch terminal encoding to UTF-8
CHCP 65001
+
+# If using PowerShell, run this command before execution to fix character encoding issues:
+$OutputEncoding = [console]::InputEncoding = [console]::OutputEncoding = New-Object System.Text.UTF8Encoding
+
# Execute prediction
.\build\Release\ppocr.exe system --det_model_dir=D:\projects\cpp\ch_PP-OCRv2_det_slim_quant_infer --rec_model_dir=D:\projects\cpp\ch_PP-OCRv2_rec_slim_quant_infer --image_dir=D:\projects\cpp\PaddleOCR\doc\imgs\11.jpg
```
-The recognition result is as follows
+
+
+## Sample Result

## FAQ
-- When running, a pop-up window prompts `The application cannot be started normally (0xc0000142)`, and the `cmd` window prompts `You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found.`, copy all the dll files in the lib in the tensor directory to the release directory, and run it again.
+- **Issue:** Application fails to start with error `(0xc0000142)` and `cmd` output shows `You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found.`
+- **Solution:** Copy all `.dll` files from the `TensorRT` directory's `lib` folder into the `Release` directory and try running it again, as in the sketch below.
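+
+  For example, assuming the TensorRT path used above:
+
+  ```shell
+  copy "D:\TensorRT-8.0.1.6\lib\*.dll" "D:\projects\cpp\PaddleOCR\deploy\cpp_infer\build\Release"
+  ```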