English | 简体中文
# PP-OCR Deployment

## Paddle Deployment Introduction

Paddle provides a variety of deployment schemes to meet the deployment requirements of different scenarios. Please choose one according to your actual needs:

## PP-OCR Deployment

PP-OCR supports multiple deployment schemes. Click a link below for the corresponding tutorial.
- Python Inference
- C++ Inference
- Serving (Python/C++)
- Paddle-Lite (ARM CPU/OpenCL ARM GPU)
- Paddle2ONNX
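As a quick orientation for the Python inference route above, a typical invocation from the PaddleOCR repository looks like the sketch below (it assumes PaddleOCR is installed and models have been exported; the model directories and image path are placeholders to adjust to your own setup):

```shell
# Sketch: PP-OCR Python inference on CPU with MKL-DNN enabled.
# det_model_dir / rec_model_dir must point to your exported inference models.
python3 tools/infer/predict_system.py \
    --image_dir="./doc/imgs/" \
    --det_model_dir="./inference/det/" \
    --rec_model_dir="./inference/rec/" \
    --use_gpu=false \
    --enable_mkldnn=true
```

The `--enable_mkldnn` flag controls MKL-DNN acceleration independently of the CPU thread setting; see the Python Inference tutorial linked above for the full list of options.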
If you need a deployment tutorial for academic algorithm models other than PP-OCR, please go directly to the main page of the corresponding algorithm (entrance).