---
comments: true
---

# Text Recognition Module Tutorial

## 1. Overview

The text recognition module is the core component of an OCR (Optical Character Recognition) system: it extracts the text contained in the text regions of an image. Its performance directly affects the accuracy and efficiency of the whole OCR system. The module typically takes the bounding boxes of text regions produced by the text detection module as input and, through image processing and deep learning algorithms, converts the text in the image into editable, searchable electronic text. The accuracy of the recognition results is critical for downstream applications such as information extraction and data mining.

## 2. Supported Models

<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Recognition Avg. Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Size (MB)</th>
<th>Description</th>
</tr>
<tr>
<td>PP-OCRv5_server_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_server_rec_pretrained.pdparams">Training Model</a></td>
<td>86.38</td><td>8.45 / 2.36</td><td>122.69 / 122.69</td><td>81</td>
<td rowspan="2">PP-OCRv5_rec is a new-generation text recognition model. It aims to support, with a single model, efficient and accurate recognition of four major languages (Simplified Chinese, Traditional Chinese, English, and Japanese) as well as challenging scenarios such as handwriting, vertical text, pinyin, and rare characters. While maintaining recognition quality, it balances inference speed and robustness, providing efficient and accurate support for document understanding in a wide range of scenarios.</td>
</tr>
<tr>
<td>PP-OCRv5_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>81.29</td><td>1.46 / 5.43</td><td>5.32 / 91.79</td><td>16</td>
</tr>
<tr>
<td>PP-OCRv4_server_rec_doc</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_server_rec_doc_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_rec_doc_pretrained.pdparams">Training Model</a></td>
<td>86.58</td><td>6.65 / 2.38</td><td>32.92 / 32.92</td><td>181</td>
<td>PP-OCRv4_server_rec_doc is trained on top of PP-OCRv4_server_rec with a mixture of additional Chinese document data and the PP-OCR training data. It adds recognition of some Traditional Chinese characters, Japanese, and special characters, supporting 15,000+ characters. Besides improving document-related text recognition, it also improves general text recognition.</td>
</tr>
<tr>
<td>PP-OCRv4_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>83.28</td><td>4.82 / 1.20</td><td>16.74 / 4.64</td><td>88</td>
<td>The lightweight recognition model of PP-OCRv4, with high inference efficiency; it can be deployed on a wide range of hardware, including edge devices.</td>
</tr>
<tr>
<td>PP-OCRv4_server_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_rec_pretrained.pdparams">Training Model</a></td>
<td>85.19</td><td>6.58 / 2.43</td><td>33.17 / 33.17</td><td>151</td>
<td>The server-side model of PP-OCRv4, with high inference accuracy; it can be deployed on a variety of servers.</td>
</tr>
<tr>
<td>en_PP-OCRv4_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/en_PP-OCRv4_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/en_PP-OCRv4_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>70.39</td><td>4.81 / 0.75</td><td>16.10 / 5.31</td><td>66</td>
<td>An ultra-lightweight English recognition model trained on the PP-OCRv4 recognition model; it supports recognition of English letters and digits.</td>
</tr>
</table>

> ❗ The table above lists the <b>6 core models</b> that the text recognition module focuses on; the module supports <b>20 models</b> in total, including several multilingual text recognition models. The complete list is as follows:

<details><summary>👉 Full model list</summary>

* <b>PP-OCRv5 multi-scenario models</b>

<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Chinese Recognition Avg. Accuracy (%)</th>
<th>English Recognition Avg. Accuracy (%)</th>
<th>Traditional Chinese Recognition Avg. Accuracy (%)</th>
<th>Japanese Recognition Avg. Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Size (MB)</th>
<th>Description</th>
</tr>
<tr>
<td>PP-OCRv5_server_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_server_rec_pretrained.pdparams">Training Model</a></td>
<td>86.38</td><td>64.70</td><td>93.29</td><td>60.35</td>
<td>8.45 / 2.36</td><td>122.69 / 122.69</td><td>81</td>
<td rowspan="2">PP-OCRv5_rec is a new-generation text recognition model. It aims to support, with a single model, efficient and accurate recognition of four major languages (Simplified Chinese, Traditional Chinese, English, and Japanese) as well as challenging scenarios such as handwriting, vertical text, pinyin, and rare characters. While maintaining recognition quality, it balances inference speed and robustness, providing efficient and accurate support for document understanding in a wide range of scenarios.</td>
</tr>
<tr>
<td>PP-OCRv5_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv5_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>81.29</td><td>66.00</td><td>83.55</td><td>54.65</td>
<td>1.46 / 5.43</td><td>5.32 / 91.79</td><td>16</td>
</tr>
</table>

* <b>Chinese recognition models</b>

<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Recognition Avg. Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Size (MB)</th>
<th>Description</th>
</tr>
<tr>
<td>PP-OCRv4_server_rec_doc</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_server_rec_doc_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_rec_doc_pretrained.pdparams">Training Model</a></td>
<td>86.58</td><td>6.65 / 2.38</td><td>32.92 / 32.92</td><td>181</td>
<td>PP-OCRv4_server_rec_doc is trained on top of PP-OCRv4_server_rec with a mixture of additional Chinese document data and the PP-OCR training data. It adds recognition of some Traditional Chinese characters, Japanese, and special characters, supporting 15,000+ characters. Besides improving document-related text recognition, it also improves general text recognition.</td>
</tr>
<tr>
<td>PP-OCRv4_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>83.28</td><td>4.82 / 1.20</td><td>16.74 / 4.64</td><td>88</td>
<td>The lightweight recognition model of PP-OCRv4, with high inference efficiency; it can be deployed on a wide range of hardware, including edge devices.</td>
</tr>
<tr>
<td>PP-OCRv4_server_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv4_server_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv4_server_rec_pretrained.pdparams">Training Model</a></td>
<td>85.19</td><td>6.58 / 2.43</td><td>33.17 / 33.17</td><td>151</td>
<td>The server-side model of PP-OCRv4, with high inference accuracy; it can be deployed on a variety of servers.</td>
</tr>
<tr>
<td>PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>75.43</td><td>5.87 / 1.19</td><td>9.07 / 4.28</td><td>138</td>
<td>The lightweight recognition model of PP-OCRv3, with high inference efficiency; it can be deployed on a wide range of hardware, including edge devices.</td>
</tr>
</table>

<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Recognition Avg. Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Size (MB)</th>
<th>Description</th>
</tr>
<tr>
<td>ch_SVTRv2_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/ch_SVTRv2_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/ch_SVTRv2_rec_pretrained.pdparams">Training Model</a></td>
<td>68.81</td><td>8.08 / 2.74</td><td>50.17 / 42.50</td><td>126</td>
<td rowspan="1">SVTRv2 is a server-side text recognition model developed by the OpenOCR team of the Vision and Learning Lab (FVL) at Fudan University. It won first prize in the PaddleOCR Algorithm Model Challenge - Track 1: End-to-End OCR Recognition, improving end-to-end recognition accuracy on the A leaderboard by 6% over PP-OCRv4.</td>
</tr>
</table>

<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Recognition Avg. Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Size (MB)</th>
<th>Description</th>
</tr>
<tr>
<td>ch_RepSVTR_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/ch_RepSVTR_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/ch_RepSVTR_rec_pretrained.pdparams">Training Model</a></td>
<td>65.07</td><td>5.93 / 1.62</td><td>20.73 / 7.32</td><td>70</td>
<td rowspan="1">RepSVTR is a mobile text recognition model based on SVTRv2. It won first prize in the PaddleOCR Algorithm Model Challenge - Track 1: End-to-End OCR Recognition, improving end-to-end recognition accuracy on the B leaderboard by 2.5% over PP-OCRv4 while keeping inference speed on par.</td>
</tr>
</table>

* <b>English recognition models</b>

<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Recognition Avg. Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Size (MB)</th>
<th>Description</th>
</tr>
<tr>
<td>en_PP-OCRv4_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/en_PP-OCRv4_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/en_PP-OCRv4_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>70.39</td><td>4.81 / 0.75</td><td>16.10 / 5.31</td><td>66</td>
<td>An ultra-lightweight English recognition model trained on the PP-OCRv4 recognition model; it supports recognition of English letters and digits.</td>
</tr>
<tr>
<td>en_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/en_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/en_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>70.69</td><td>5.44 / 0.75</td><td>8.65 / 5.57</td><td>85</td>
<td>An ultra-lightweight English recognition model trained on the PP-OCRv3 recognition model; it supports recognition of English letters and digits.</td>
</tr>
</table>

* <b>Multilingual recognition models</b>

<table>
<tr>
<th>Model</th><th>Download Links</th>
<th>Recognition Avg. Accuracy (%)</th>
<th>GPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>CPU Inference Time (ms)<br/>[Regular Mode / High-Performance Mode]</th>
<th>Model Size (MB)</th>
<th>Description</th>
</tr>
<tr>
<td>korean_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/korean_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/korean_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>60.21</td><td>5.40 / 0.97</td><td>9.11 / 4.05</td><td>114</td>
<td>An ultra-lightweight Korean recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Korean and digits.</td>
</tr>
<tr>
<td>japan_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/japan_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/japan_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>45.69</td><td>5.70 / 1.02</td><td>8.48 / 4.07</td><td>120</td>
<td>An ultra-lightweight Japanese recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Japanese and digits.</td>
</tr>
<tr>
<td>chinese_cht_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/chinese_cht_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/chinese_cht_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>82.06</td><td>5.90 / 1.28</td><td>9.28 / 4.34</td><td>152</td>
<td>An ultra-lightweight Traditional Chinese recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Traditional Chinese and digits.</td>
</tr>
<tr>
<td>te_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/te_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/te_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>95.88</td><td>5.42 / 0.82</td><td>8.10 / 6.91</td><td>85</td>
<td>An ultra-lightweight Telugu recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Telugu and digits.</td>
</tr>
<tr>
<td>ka_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/ka_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/ka_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>96.96</td><td>5.25 / 0.79</td><td>9.09 / 3.86</td><td>85</td>
<td>An ultra-lightweight Kannada recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Kannada and digits.</td>
</tr>
<tr>
<td>ta_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/ta_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/ta_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>76.83</td><td>5.23 / 0.75</td><td>10.13 / 4.30</td><td>85</td>
<td>An ultra-lightweight Tamil recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Tamil and digits.</td>
</tr>
<tr>
<td>latin_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/latin_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/latin_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>76.93</td><td>5.20 / 0.79</td><td>8.83 / 7.15</td><td>85</td>
<td>An ultra-lightweight Latin-script recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Latin script and digits.</td>
</tr>
<tr>
<td>arabic_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/arabic_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/arabic_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>73.55</td><td>5.35 / 0.79</td><td>8.80 / 4.56</td><td>85</td>
<td>An ultra-lightweight Arabic-script recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Arabic script and digits.</td>
</tr>
<tr>
<td>cyrillic_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/cyrillic_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/cyrillic_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>94.28</td><td>5.23 / 0.76</td><td>8.89 / 3.88</td><td>85</td>
<td>An ultra-lightweight Cyrillic-script recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Cyrillic script and digits.</td>
</tr>
<tr>
<td>devanagari_PP-OCRv3_mobile_rec</td><td><a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0.0/devanagari_PP-OCRv3_mobile_rec_infer.tar">Inference Model</a>/<a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/devanagari_PP-OCRv3_mobile_rec_pretrained.pdparams">Training Model</a></td>
<td>96.44</td><td>5.22 / 0.79</td><td>8.56 / 4.06</td><td>85</td>
<td>An ultra-lightweight Devanagari-script recognition model trained on the PP-OCRv3 recognition model; it supports recognition of Devanagari script and digits.</td>
</tr>
</table>

<strong>Test Environment:</strong>

<ul>
<li><b>Performance test environment</b>
<ul>
<li><strong>Test datasets:</strong>
<ul>
<li>Chinese recognition models: a PaddleOCR in-house Chinese dataset covering street scenes, web images, documents, and handwriting, with 11,000 images for text recognition.</li>
<li>ch_SVTRv2_rec: evaluation set of the A leaderboard of the <a href="https://aistudio.baidu.com/competition/detail/1131/0/introduction">PaddleOCR Algorithm Model Challenge - Track 1: End-to-End OCR Recognition</a>.</li>
<li>ch_RepSVTR_rec: evaluation set of the B leaderboard of the <a href="https://aistudio.baidu.com/competition/detail/1131/0/introduction">PaddleOCR Algorithm Model Challenge - Track 1: End-to-End OCR Recognition</a>.</li>
<li>English recognition models: a PaddleOCR in-house English dataset.</li>
<li>Multilingual recognition models: a PaddleOCR in-house multilingual dataset.</li>
</ul>
</li>
<li><strong>Hardware:</strong>
<ul>
<li>GPU: NVIDIA Tesla T4</li>
<li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li>
<li>Other: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2</li>
</ul>
</li>
</ul>
</li>
<li><b>Inference modes</b></li>
</ul>

<table border="1">
<thead>
<tr>
<th>Mode</th>
<th>GPU Configuration</th>
<th>CPU Configuration</th>
<th>Acceleration Techniques</th>
</tr>
</thead>
<tbody>
<tr>
<td>Regular Mode</td>
<td>FP32 precision / no TRT acceleration</td>
<td>FP32 precision / 8 threads</td>
<td>PaddleInference</td>
</tr>
<tr>
<td>High-Performance Mode</td>
<td>Optimal combination of precision and acceleration strategy chosen from prior knowledge</td>
<td>FP32 precision / 8 threads</td>
<td>Optimal backend chosen from prior knowledge (Paddle/OpenVINO/TRT, etc.)</td>
</tr>
</tbody>
</table>

</details>

## 3. Quick Start

> ❗ Before getting started, install the PaddleOCR wheel package. For details, see the [installation guide](../installation.md).

You can try it out with a single command:

```bash
paddleocr text_recognition -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.png
```

You can also integrate the text recognition module's model inference into your own project. Before running the code below, download the [sample image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.png) to your local machine.

```python
from paddleocr import TextRecognition

# Instantiate the text recognition model
model = TextRecognition(model_name="PP-OCRv5_server_rec")

# Run inference on the downloaded sample image
output = model.predict(input="general_ocr_rec_001.png", batch_size=1)
for res in output:
    # Print the result, then save it as a visualization image and as a JSON file
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```

After running, the result is:

```bash
{'res': {'input_path': 'general_ocr_rec_001.png', 'page_index': None, 'rec_text': '绿洲仕格维花园公寓', 'rec_score': 0.9823867082595825}}
```

The fields in the result have the following meanings (a short access example follows the list):

- `input_path`: the path of the input text-line image
- `page_index`: if the input is a PDF file, the page number of the current page; otherwise `None`
- `rec_text`: the text predicted for the text-line image
- `rec_score`: the prediction confidence for the text-line image
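
For instance, here is a small sketch that keeps only confident predictions. It assumes `res.json` mirrors the printed structure above (a top-level `'res'` key holding `rec_text` and `rec_score`) and reuses `output` from the snippet above (re-run `predict()` first if it has already been consumed):

```python
confident_texts = []
for res in output:
    item = res.json["res"]  # assumed to match the printed structure above
    if item["rec_score"] >= 0.9:  # keep only high-confidence text lines
        confident_texts.append(item["rec_text"])
print(confident_texts)
```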

The visualization is as follows:

<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/modules/text_recog/general_ocr_rec_001.png"/>

The relevant methods and parameters are described below:

* `TextRecognition` instantiates the text recognition model (using `PP-OCRv5_server_rec` as an example); the parameters are as follows:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default</th>
</tr>
</thead>
<tr>
<td><code>model_name</code></td>
<td>Model name</td>
<td><code>str</code></td>
<td>Any model name supported by this module</td>
<td>None</td>
</tr>
<tr>
<td><code>model_dir</code></td>
<td>Model storage path</td>
<td><code>str</code></td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td><code>device</code></td>
<td>Device used for inference</td>
<td><code>str</code></td>
<td>Supports a specific GPU card such as "gpu:0", a specific card of other hardware such as "npu:0", or the CPU as "cpu".</td>
<td><code>gpu:0</code></td>
</tr>
<tr>
<td><code>use_hpip</code></td>
<td>Whether to enable the high-performance inference plugin</td>
<td><code>bool</code></td>
<td>None</td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>hpi_config</code></td>
<td>High-performance inference configuration</td>
<td><code>dict</code> | <code>None</code></td>
<td>None</td>
<td><code>None</code></td>
</tr>
</table>

* `model_name` must be specified. If `model_dir` is also specified, the user-defined custom model is used, as sketched below.
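
A minimal sketch of these constructor options follows; the directory path is a placeholder for wherever a custom or exported model lives:

```python
from paddleocr import TextRecognition

# model_name selects one of the supported models; model_dir (optional) points at
# custom weights, e.g. a fine-tuned and exported model; device can be "cpu",
# "gpu:0", "npu:0", and so on. "./my_rec_model/" is a hypothetical path.
model = TextRecognition(
    model_name="PP-OCRv5_server_rec",
    model_dir="./my_rec_model/",
    device="cpu",
)
```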

* Call the `predict()` method of the text recognition model to run inference; it returns a list of results. The module also provides a `predict_iter()` method, which accepts the same parameters and returns the same results, except that it returns a `generator` that processes and yields predictions incrementally, making it a better fit for large datasets or memory-constrained scenarios (see the sketch after the parameter table below). Either method can be used, depending on your needs. The `predict()` method takes the parameters `input` and `batch_size`, described as follows:

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
<th>Type</th>
<th>Options</th>
<th>Default</th>
</tr>
</thead>
<tr>
<td><code>input</code></td>
<td>Data to be predicted; multiple input types are supported</td>
<td><code>Python Var</code>/<code>str</code>/<code>list</code></td>
<td>
<ul>
<li><b>Python variable</b>, e.g. image data represented as <code>numpy.ndarray</code></li>
<li><b>File path</b>, e.g. the local path of an image file: <code>/root/data/img.jpg</code></li>
<li><b>URL</b>, e.g. the web URL of an image file: <a href="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_rec_001.png">example</a></li>
<li><b>Local directory</b>, which must contain the data files to predict, e.g. <code>/root/data/</code></li>
<li><b>List</b>, whose elements must be of the above types, e.g. <code>[numpy.ndarray, numpy.ndarray]</code>, <code>["/root/data/img1.jpg", "/root/data/img2.jpg"]</code>, <code>["/root/data1", "/root/data2"]</code></li>
</ul>
</td>
<td>None</td>
</tr>
<tr>
<td><code>batch_size</code></td>
<td>Batch size</td>
<td><code>int</code></td>
<td>Any positive integer</td>
<td>1</td>
</tr>
</table>
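
As a rough sketch of the `predict_iter()` variant mentioned above (assuming a local folder of cropped text-line images; `./text_line_crops/` is a placeholder path), results are consumed one at a time instead of being collected in a list:

```python
from paddleocr import TextRecognition

model = TextRecognition(model_name="PP-OCRv5_server_rec")

# predict_iter() accepts the same arguments as predict() but yields results lazily,
# which keeps memory usage flat when iterating over a large directory of images.
for res in model.predict_iter(input="./text_line_crops/", batch_size=8):
    res.print()
```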

* Each sample's prediction result is a corresponding Result object, which supports printing, saving as an image, and saving as a `json` file (see the example after the table below):

<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
<th>Parameter</th>
<th>Type</th>
<th>Parameter Description</th>
<th>Default</th>
</tr>
</thead>
<tr>
<td rowspan="3"><code>print()</code></td>
<td rowspan="3">Print the result to the terminal</td>
<td><code>format_json</code></td>
<td><code>bool</code></td>
<td>Whether to format the output with <code>JSON</code> indentation</td>
<td><code>True</code></td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to prettify the <code>JSON</code> output and make it more readable; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; when <code>False</code>, the original characters are kept. Effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
<tr>
<td rowspan="3"><code>save_to_json()</code></td>
<td rowspan="3">Save the result as a JSON file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Path of the saved file; when it is a directory, the saved file is named after the input file</td>
<td>None</td>
</tr>
<tr>
<td><code>indent</code></td>
<td><code>int</code></td>
<td>Indentation level used to prettify the <code>JSON</code> output and make it more readable; effective only when <code>format_json</code> is <code>True</code></td>
<td>4</td>
</tr>
<tr>
<td><code>ensure_ascii</code></td>
<td><code>bool</code></td>
<td>Whether to escape non-<code>ASCII</code> characters to <code>Unicode</code>. When <code>True</code>, all non-<code>ASCII</code> characters are escaped; when <code>False</code>, the original characters are kept. Effective only when <code>format_json</code> is <code>True</code></td>
<td><code>False</code></td>
</tr>
<tr>
<td><code>save_to_img()</code></td>
<td>Save the result as an image file</td>
<td><code>save_path</code></td>
<td><code>str</code></td>
<td>Path of the saved file; when it is a directory, the saved file is named after the input file</td>
<td>None</td>
</tr>
</table>
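
For example, a sketch that reuses the result objects from the quick-start snippet and passes the formatting parameters explicitly, keeping non-ASCII text unescaped both in the terminal and on disk:

```python
for res in output:
    # Print with 2-space indentation and without escaping non-ASCII characters
    res.print(format_json=True, indent=2, ensure_ascii=False)
    # Save to a directory; the file is named after the input image
    res.save_to_json(save_path="./output/", indent=2, ensure_ascii=False)
```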

* In addition, the visualization image and the prediction result can also be obtained through attributes, as follows (a sketch follows the table):

<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tr>
<td rowspan="1"><code>json</code></td>
<td rowspan="1">Get the prediction result in <code>json</code> format</td>
</tr>
<tr>
<td rowspan="1"><code>img</code></td>
<td rowspan="1">Get the visualization image as a <code>dict</code></td>
</tr>
</table>
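
A minimal sketch of reading both attributes; it assumes `res.json` mirrors the printed structure shown earlier and that the values in the `res.img` dict behave like PIL images (i.e. expose a `save()` method):

```python
import os

os.makedirs("./output", exist_ok=True)

for res in output:
    # Raw prediction, e.g. {'res': {'rec_text': ..., 'rec_score': ...}}
    print(res.json)

    # res.img maps names to visualization images; save each one manually
    for name, image in res.img.items():
        image.save(os.path.join("./output", f"{name}.png"))
```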

## 4. Custom Development

If the models above still do not perform well enough in your scenario, you can follow the steps below for custom development. Training `PP-OCRv5_server_rec` is used as the example here; for other models, simply substitute the corresponding configuration file. First, prepare a text recognition dataset, following the format of the [text recognition demo data](https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ocr_rec_dataset_examples.tar). Once the data is ready, follow the steps below to train and export the model; the exported model can then be quickly integrated into the API described above. The text recognition demo data is used throughout this example. Before training, make sure that the dependencies required by PaddleOCR have been installed as described in the [installation guide](../installation.md).

### 4.1 Dataset and Pretrained Model Preparation

#### 4.1.1 Prepare the Dataset

```shell
# Download the sample dataset
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ocr_rec_dataset_examples.tar
tar -xf ocr_rec_dataset_examples.tar
```

#### 4.1.2 Download the Pretrained Model

```shell
# Download the PP-OCRv5_server_rec pretrained model
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_server_rec_pretrained.pdparams
```

### 4.2 Model Training

PaddleOCR is organized into modular components; to train the `PP-OCRv5_server_rec` recognition model, use the `PP-OCRv5_server_rec` [configuration file](https://github.com/PaddlePaddle/PaddleOCR/blob/main/configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml).

The training commands are as follows:

```bash
# Single-GPU training (default)
python3 tools/train.py -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml \
    -o Global.pretrained_model=./PP-OCRv5_server_rec_pretrained.pdparams

# Multi-GPU training; specify the card IDs with --gpus
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml \
    -o Global.pretrained_model=./PP-OCRv5_server_rec_pretrained.pdparams
```

### 4.3 Model Evaluation

You can evaluate trained weights, e.g. `output/xxx/xxx.pdparams`, with the following command:

```bash
# Note: set pretrained_model to a local path. If you use weights saved from your own
# training, adjust the path and file name to {path/to/weights}/{model_name}.
# Evaluate on the demo test set
python3 tools/eval.py -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml -o \
    Global.pretrained_model=output/xxx/xxx.pdparams
```

### 4.4 Model Export
```bash
python3 tools/export_model.py -c configs/rec/PP-OCRv5/PP-OCRv5_server_rec.yml -o \
    Global.pretrained_model=output/xxx/xxx.pdparams \
    Global.save_inference_dir="./PP-OCRv5_server_rec_infer/"
```

After export, the static-graph model is stored in `./PP-OCRv5_server_rec_infer/` under the current directory, where you will find the following files:

```
./PP-OCRv5_server_rec_infer/
├── inference.json
├── inference.pdiparams
├── inference.yml
```

Custom development is now complete: this static-graph model can be integrated directly into the PaddleOCR API, as sketched below.
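
As a closing sketch (the paths are the ones used above; adjust them to your own setup), the exported directory can be plugged back into the Python API through `model_dir`:

```python
from paddleocr import TextRecognition

# Use the directory produced by tools/export_model.py as the model source
model = TextRecognition(
    model_name="PP-OCRv5_server_rec",
    model_dir="./PP-OCRv5_server_rec_infer/",
)

for res in model.predict(input="general_ocr_rec_001.png", batch_size=1):
    res.print()
```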

## 5. FAQ