English | 简体中文
# PP-OCR Deployment

## Paddle Deployment Introduction
Paddle provides a variety of deployment schemes to meet the deployment requirements of different scenarios. Please choose the scheme that best fits your use case.
## PP-OCR Deployment
PP-OCR supports multiple deployment schemes. Click a link below for the corresponding tutorial; a minimal Python inference sketch follows the list.
- Python Inference
- C++ Inference
- Serving (Python/C++)
- Paddle-Lite (ARM CPU/OpenCL ARM GPU)
- Paddle2ONNX
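The sketch below illustrates the "Python Inference" path via the `paddleocr` pip package, which wraps the Paddle inference engine. It assumes `pip install paddleocr`; the image path is illustrative, and parameter names and the result layout may differ across PaddleOCR versions.

```python
# Minimal Python inference sketch, assuming the `paddleocr` pip package.
from paddleocr import PaddleOCR

# Initialize the detection + recognition pipeline; `lang='en'` selects the
# English models, `use_angle_cls=True` enables text-direction classification.
ocr = PaddleOCR(use_angle_cls=True, lang='en')

# Run OCR on a local image (path is illustrative).
result = ocr.ocr('doc/imgs_en/img_12.jpg', cls=True)

# Each line is [bounding_box, (text, confidence)]; the exact nesting of
# `result` can vary by version, here one image's results are in result[0].
for line in result[0]:
    box, (text, score) = line
    print(f'{text} ({score:.2f})')
```

For C++ inference, serving, Paddle-Lite, or Paddle2ONNX deployment, follow the dedicated tutorials linked above.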
If you need a deployment tutorial for academic algorithm models other than PP-OCR, please go directly to the main page of the corresponding algorithm.