## 🚀 Introduction

Since its initial release, PaddleOCR has gained widespread acclaim across academia, industry, and research communities, thanks to its cutting-edge algorithms and proven performance in real-world applications. It's already powering popular open-source projects like Umi-OCR, OmniParser, MinerU, and RAGFlow, making it the go-to OCR toolkit for developers worldwide.

On May 20, 2025, the PaddlePaddle team unveiled PaddleOCR 3.0, fully compatible with the official release of the PaddlePaddle 3.0 framework. This update further boosts text-recognition accuracy, adds support for multiple text-type recognition and handwriting recognition, and meets the growing demand from large-model applications for high-precision parsing of complex documents. When combined with ERNIE 4.5 Turbo, it significantly enhances key-information extraction accuracy. PaddleOCR 3.0 also introduces support for domestic hardware platforms such as KUNLUNXIN and Ascend.
**Three Major New Features in PaddleOCR 3.0:**

- **Universal-Scene Text Recognition Model PP-OCRv5:** A single model that handles five different text types plus complex handwriting. Overall recognition accuracy has increased by 13 percentage points over the previous generation. Online Demo
- **General Document-Parsing Solution PP-StructureV3:** Delivers high-precision parsing of multi-layout, multi-scene PDFs, outperforming many open- and closed-source solutions on public benchmarks. Online Demo
- **Intelligent Document-Understanding Solution PP-ChatOCRv4:** Natively powered by the ERNIE 4.5 Turbo large model, achieving 15 percentage points higher accuracy than its predecessor. Online Demo

In addition to providing an outstanding model library, PaddleOCR 3.0 also offers user-friendly tools covering model training, inference, and service deployment, so developers can rapidly bring AI applications to production.
## 📣 Recent updates

🔥🔥 **2025.05.20: Official Release of PaddleOCR v3.0**, including:

- **PP-OCRv5**: High-Accuracy Text Recognition Model for All Scenarios - Instant Text from Images/PDFs.
    - 🌐 Single-model support for five text types - Seamlessly process Simplified Chinese, Traditional Chinese, Simplified Chinese Pinyin, English, and Japanese within a single model.
    - ✍️ Improved handwriting recognition: Significantly better at complex cursive scripts and non-standard handwriting.
    - 🎯 13-point accuracy gain over PP-OCRv4, achieving state-of-the-art performance across a variety of real-world scenarios.
- **PP-StructureV3**: General-Purpose Document Parsing - Unleash SOTA image/PDF parsing for real-world scenarios!
    - 🧮 High-accuracy multi-scene PDF parsing, leading both open- and closed-source solutions on the OmniDocBench benchmark.
    - 🧠 Specialized capabilities include seal recognition, chart-to-table conversion, table recognition with nested formulas/images, vertical-text document parsing, and complex table structure analysis.
- **PP-ChatOCRv4**: Intelligent Document Understanding - Extract key information, not just text, from Images/PDFs.
    - 🔥 15-point accuracy gain in key-information extraction on PDF/PNG/JPG files over the previous generation.
    - 💻 Native support for ERNIE 4.5 Turbo, with compatibility for large-model deployments via PaddleNLP, Ollama, vLLM, and more.
    - 🤝 Integrated PP-DocBee2, enabling extraction and understanding of printed text, handwriting, seals, tables, charts, and other common elements in complex documents.
**The history of updates**

- 🔥🔥 **2025.03.07: Release of PaddleOCR v2.10**, including:
    - **12 new self-developed models:**
        - **Layout Detection series (3 models):** PP-DocLayout-L, M, and S -- capable of detecting 23 common layout types across diverse document formats (papers, reports, exams, books, magazines, contracts, etc.) in English and Chinese. Achieves up to 90.4% mAP@0.5, and the lightweight models can process over 100 pages per second.
        - **Formula Recognition series (2 models):** PP-FormulaNet-L and S -- support recognition of 50,000+ LaTeX expressions, handling both printed and handwritten formulas. PP-FormulaNet-L offers 6% higher accuracy than comparable models; PP-FormulaNet-S is 16x faster while maintaining similar accuracy.
        - **Table Structure Recognition series (2 models):** SLANeXt_wired and SLANeXt_wireless -- newly developed models with a 6% accuracy improvement over SLANet_plus in complex table recognition.
        - **Table Classification (1 model):** PP-LCNet_x1_0_table_cls -- an ultra-lightweight classifier for wired and wireless tables.
## ⚡ Quick Start

### 1. Run online demo

### 2. Installation

Install PaddlePaddle by referring to the Installation Guide, then install the PaddleOCR toolkit:

```bash
# Install paddleocr
pip install paddleocr==3.0.0
```
### 3. Run inference by CLI

```bash
# Run PP-OCRv5 inference
paddleocr ocr -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --use_doc_orientation_classify False --use_doc_unwarping False --use_textline_orientation False

# Run PP-StructureV3 inference
paddleocr pp_structurev3 -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png --use_doc_orientation_classify False --use_doc_unwarping False

# Get the Qianfan API Key first, and then run PP-ChatOCRv4 inference
paddleocr pp_chatocrv4_doc -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png -k 驾驶室准乘人数 --qianfan_api_key your_api_key --use_doc_orientation_classify False --use_doc_unwarping False

# Get more information about "paddleocr ocr"
paddleocr ocr --help
```
### 4. Run inference by API

#### 4.1 PP-OCRv5 Example

```python
from paddleocr import PaddleOCR

# Initialize PaddleOCR instance
ocr = PaddleOCR(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False)

# Run OCR inference on a sample image
result = ocr.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png")

# Visualize the results and save the JSON results
for res in result:
    res.print()
    res.save_to_img("output")
    res.save_to_json("output")
```
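The JSON written by `save_to_json` typically carries the recognized strings under a `rec_texts` key (an assumption about the serialized schema; verify against the output of your PaddleOCR version). A small helper to read them back might look like:

```python
import json

def load_rec_texts(json_path: str) -> list[str]:
    """Return the recognized text strings from a saved result JSON.
    Assumes a top-level 'rec_texts' list, which is how PP-OCRv5
    results are commonly serialized (schema assumption)."""
    with open(json_path, encoding="utf-8") as f:
        data = json.load(f)
    return data.get("rec_texts", [])
```

This keeps downstream text processing decoupled from the pipeline objects, since only the saved files are needed.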
#### 4.2 PP-StructureV3 Example

```python
from pathlib import Path
from paddleocr import PPStructureV3

pipeline = PPStructureV3()

# For Image
output = pipeline.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)

# Visualize the results and save the JSON results
for res in output:
    res.print()
    res.save_to_json(save_path="output")
    res.save_to_markdown(save_path="output")
```
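For multi-page PDFs, `save_to_markdown` produces one Markdown file per page. Assuming the per-page files sort correctly by filename (an assumption about the output layout, not a documented guarantee), they can be stitched into a single document like this:

```python
from pathlib import Path

def merge_markdown_pages(out_dir: str) -> str:
    """Concatenate per-page .md files from a save_to_markdown output
    directory into one document, in filename order."""
    pages = sorted(Path(out_dir).glob("*.md"))
    return "\n\n".join(p.read_text(encoding="utf-8") for p in pages)
```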
#### 4.3 PP-ChatOCRv4 Example

```python
from paddleocr import PPChatOCRv4Doc

chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "ernie-3.5-8k",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

retriever_config = {
    "module_name": "retriever",
    "model_name": "embedding-v1",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "qianfan",
    "api_key": "api_key",  # your api_key
}

mllm_chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "PP-DocBee",
    "base_url": "http://127.0.0.1:8080/",  # your local mllm service url
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

pipeline = PPChatOCRv4Doc(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False)

visual_predict_res = pipeline.visual_predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
    use_common_ocr=True,
    use_seal_recognition=True,
    use_table_recognition=True,
)

visual_info_list = []
for res in visual_predict_res:
    visual_info_list.append(res["visual_info"])
    layout_parsing_result = res["layout_parsing_result"]

vector_info = pipeline.build_vector(
    visual_info_list, flag_save_bytes_vector=True, retriever_config=retriever_config
)

# The key "驾驶室准乘人数" means "approved cab seating capacity"
mllm_predict_res = pipeline.mllm_pred(
    input="vehicle_certificate-1.png",
    key_list=["驾驶室准乘人数"],
    mllm_chat_bot_config=mllm_chat_bot_config,
)
mllm_predict_info = mllm_predict_res["mllm_res"]

chat_result = pipeline.chat(
    key_list=["驾驶室准乘人数"],
    visual_info=visual_info_list,
    vector_info=vector_info,
    mllm_predict_info=mllm_predict_info,
    chat_bot_config=chat_bot_config,
    retriever_config=retriever_config,
)
print(chat_result)
```
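The `chat_bot_config` above uses `api_type: "openai"`, meaning an OpenAI-compatible endpoint. As a rough illustration of what such a config describes (a hypothetical helper, not part of the PaddleOCR API), the underlying chat request would be assembled along these lines:

```python
def build_chat_request(config: dict, prompt: str) -> dict:
    """Sketch of the OpenAI-compatible chat request implied by a
    chat_bot_config dict. For illustration only; PaddleOCR handles
    this internally."""
    return {
        "url": config["base_url"].rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": "Bearer " + config["api_key"],
            "Content-Type": "application/json",
        },
        "json": {
            "model": config["model_name"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request(
    {"model_name": "ernie-3.5-8k",
     "base_url": "https://qianfan.baidubce.com/v2",
     "api_key": "api_key"},
    "Summarize the extracted fields.",
)
```

Any service exposing this request shape (vLLM, Ollama, etc.) can therefore be plugged in via `base_url`.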
### 5. Domestic AI Accelerators

## ⛰️ Advanced Tutorials

## 🔄 Quick Overview of Execution Results

## 👩👩👧👦 Community
| PaddlePaddle WeChat official account | Join the tech discussion group |
| --- | --- |
## 😃 Awesome Projects Leveraging PaddleOCR

PaddleOCR wouldn't be where it is today without its incredible community! 💗 A massive thank you to all our longtime partners, new collaborators, and everyone who's poured their passion into PaddleOCR - whether we've named you or not. Your support fuels our fire!

| Project Name | Description |
| --- | --- |
| RAGFlow | RAG engine based on deep document understanding. |
| MinerU | Multi-type document to Markdown conversion tool. |
| Umi-OCR | Free, open-source, batch offline OCR software. |
| OmniParser | Screen parsing tool for pure-vision-based GUI agents. |
| QAnything | Question and answer based on anything. |
| PDF-Extract-Kit | A powerful open-source toolkit designed to efficiently extract high-quality content from complex and diverse PDF documents. |
| Dango-Translator | Recognize text on the screen, translate it, and show the translation results in real time. |
| Learn more projects | More projects based on PaddleOCR |
## 👩👩👧👦 Contributors

## 🌟 Star

## 📄 License

This project is released under the Apache 2.0 license.

## 🎓 Citation
```bibtex
@misc{paddleocr2020,
    title={PaddleOCR, Awesome multilingual OCR toolkits based on PaddlePaddle.},
    author={PaddlePaddle Authors},
    howpublished={\url{https://github.com/PaddlePaddle/PaddleOCR}},
    year={2020}
}
```