From 458afe607de283b5bb2bc9c8c3989a1b723e58f6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=AD=A6=E5=8D=BF?= <64625668+leo-q8@users.noreply.github.com> Date: Wed, 21 May 2025 15:27:27 +0800 Subject: [PATCH] refine ocr pipeline docs (#15286) --- docs/version3.x/pipeline_usage/OCR.en.md | 1566 +++++++++++----------- docs/version3.x/pipeline_usage/OCR.md | 8 +- 2 files changed, 821 insertions(+), 753 deletions(-) diff --git a/docs/version3.x/pipeline_usage/OCR.en.md b/docs/version3.x/pipeline_usage/OCR.en.md index a531766557..daf07ac457 100644 --- a/docs/version3.x/pipeline_usage/OCR.en.md +++ b/docs/version3.x/pipeline_usage/OCR.en.md @@ -49,7 +49,7 @@ In this pipeline, you can select models based on the benchmark test data provide
-Text Image Unwar'p Module (Optional): +Text Image Unwarp Module (Optional): @@ -227,14 +227,14 @@ PP-OCRv5_mobile_rec_infer.tar">Inference Model/Inference Model/Inference Model/Training Model - - + + - - + +
83.28 4.82 / 1.20 16.74 / 4.6488Lightweight model optimized for edge devices.11 MLightweight recognition model of PP-OCRv4 with high inference efficiency, deployable on various hardware devices including edge devices
PP-OCRv4_server_rec Inference Model/Training Model 85.19 6.58 / 2.43 33.17 / 33.17151High-accuracy server-side model.87 MServer-side model of PP-OCRv4 with high inference accuracy, deployable on various server platforms
PP-OCRv3_mobile_recInference Model/Inference Model/ -SVTRv2, developed by FVL's OpenOCR team, won first prize in the PaddleOCR Algorithm Challenge, improving end-to-end recognition accuracy by 6% over PP-OCRv4. +SVTRv2 is a server-side text recognition model developed by the OpenOCR team from Fudan University Vision and Learning Lab (FVL). It won first prize in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition, improving end-to-end recognition accuracy by 6% compared to PP-OCRv4 on List A.
- - - - - + + + + + @@ -308,19 +308,19 @@ SVTRv2, developed by FVL's OpenOCR team, won first prize in the PaddleOCR Algori - - + +
ModelDownload LinksAccuracy(%)GPU Inference Time (ms)
[Standard / High-Performance]
CPU Inference Time (ms)
[Standard / High-Performance]
Model Size (MB)ModelDownload LinkRecognition Avg Accuracy(%)GPU Inference Time (ms)
[Standard Mode / High Performance Mode]
CPU Inference Time (ms)
[Standard Mode / High Performance Mode]
Model Size (M) Description
65.07 5.93 / 1.62 20.73 / 7.3270RepSVTR, a mobile-optimized version of SVTRv2, won first prize in the PaddleOCR Challenge, improving accuracy by 2.5% over PP-OCRv4 with comparable speed.22.1 MRepSVTR is a mobile text recognition model based on SVTRv2. It won first prize in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition, improving end-to-end recognition accuracy by 2.5% compared to PP-OCRv4 on List B while maintaining comparable inference speed.
-* English Recognition Models +* English Recognition Models - - - - - + + + + + @@ -329,8 +329,8 @@ en_PP-OCRv4_mobile_rec_infer.tar">Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/Inference Model/example), or directory (e.g., /root/data/); -
  • List: List of inputs, e.g., [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"].
  • - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
The command line supports additional parameter settings; click to expand for detailed descriptions of the command-line parameters. +
    ModelDownload LinksAccuracy(%)GPU Inference Time (ms)
    [Standard / High-Performance]
    CPU Inference Time (ms)
    [Standard / High-Performance]
    Model Size (MB)ModelDownload LinkRecognition Avg Accuracy(%)GPU Inference Time (ms)
    [Standard Mode / High Performance Mode]
    CPU Inference Time (ms)
    [Standard Mode / High Performance Mode]
    Model Size (M) Description
    Python Var|str|list
    save_pathPath to save inference results. If None, results are not saved locally.str
    doc_orientation_classify_model_nameName of the document orientation classification model. If None, the default pipeline model is used.strNone
    doc_orientation_classify_model_dirDirectory path of the document orientation classification model. If None, the official model is downloaded.strNone
    doc_unwarping_model_nameName of the text image correction model. If None, the default pipeline model is used.strNone
    doc_unwarping_model_dirDirectory path of the text image correction model. If None, the official model is downloaded.strNone
    text_detection_model_nameName of the text detection model. If None, the default pipeline model is used.strNone
    text_detection_model_dirDirectory path of the text detection model. If None, the official model is downloaded.strNone
    text_line_orientation_model_nameName of the text line orientation model. If None, the default pipeline model is used.strNone
    text_line_orientation_model_dirDirectory path of the text line orientation model. If None, the official model is downloaded.strNone
    text_line_orientation_batch_sizeBatch size for the text line orientation model. If None, defaults to 1.intNone
    text_recognition_model_nameName of the text recognition model. If None, the default pipeline model is used.strNone
    text_recognition_model_dirDirectory path of the text recognition model. If None, the official model is downloaded.strNone
    text_recognition_batch_sizeBatch size for the text recognition model. If None, defaults to 1.intNone
    use_doc_orientation_classifyWhether to enable document orientation classification. If None, defaults to pipeline initialization value (True).boolNone
    use_doc_unwarpingWhether to enable text image correction. If None, defaults to pipeline initialization value (True).boolNone
    use_textline_orientationWhether to enable text line orientation classification. If None, defaults to pipeline initialization value (True).boolNone
    text_det_limit_side_lenMaximum side length limit for text detection. -
      -
    • int: Any integer > 0;
    • -
    • None: If None, defaults to pipeline initialization value (960).
    • -
    -
    intNone
    text_det_limit_typeSide length limit type for text detection. -
      -
    • str: Supports min (ensures shortest side ≥ det_limit_side_len) or max (ensures longest side ≤ limit_side_len);
    • -
    • None: If None, defaults to pipeline initialization value (max).
    • -
    -
    strNone
    text_det_threshPixel threshold for text detection. Pixels with scores > this threshold are considered text. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization value (0.3).
    • -
    -
    floatNone
    text_det_box_threshBox threshold for text detection. Detected regions with average scores > this threshold are retained. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization value (0.6).
    • -
    -
    floatNone
    text_det_unclip_ratioExpansion ratio for text detection. Larger values expand text regions more. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization value (2.0).
    • -
    -
    floatNone
    text_det_input_shapeInput shape for text detection.tupleNone
    text_rec_score_threshScore threshold for text recognition. Results with scores > this threshold are retained. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization value (0.0, no threshold).
    • -
    -
    floatNone
    text_rec_input_shapeInput shape for text recognition.tupleNone
    langSpecifies the OCR model language. -
      -
    • ch: Chinese;
    • -
    • en: English;
    • -
    • korean: Korean;
    • -
    • japan: Japanese;
    • -
    • chinese_cht: Traditional Chinese;
    • -
    • te: Telugu;
    • -
    • ka: Kannada;
    • -
    • ta: Tamil;
    • -
    • None: If None, defaults to ch.
    • -
    -
    strNone
    ocr_versionOCR model version. -
      -
    • PP-OCRv5: Uses PP-OCRv5 models;
    • -
    • PP-OCRv4: Uses PP-OCRv4 models;
    • -
    • PP-OCRv3: Uses PP-OCRv3 models;
    • -
    • None: If None, defaults to PP-OCRv5 models.
    • -
    -
    strNone
    deviceDevice for inference. Supports: -
      -
    • CPU: cpu;
    • -
    • GPU: gpu:0 (first GPU);
    • -
    • NPU: npu:0;
    • -
    • XPU: xpu:0;
    • -
    • MLU: mlu:0;
    • -
    • DCU: dcu:0;
    • -
    • None: If None, defaults to GPU 0 (if available) or CPU.
    • -
    -
    strNone
    enable_hpiWhether to enable high-performance inference.boolFalse
    use_tensorrtWhether to use TensorRT for acceleration.boolFalse
    min_subgraph_sizeMinimum subgraph size for model optimization.int3
    precisionComputation precision (e.g., fp32, fp16).strfp32
    enable_mkldnnWhether to enable MKL-DNN acceleration. If None, enabled by default.boolNone
    cpu_threadsNumber of CPU threads for inference.int8
    + - - + + + + + + + + + + + + + + + + - -
    paddlex_configPath to PaddleX pipeline configuration file.ParameterParameter DescriptionParameter TypeDefault Value
    inputData to be predicted, supporting multiple input types (required). +
      +
    • Python Var: Image data represented by numpy.ndarray
    • +
• str: Local path of an image or PDF file, e.g. /root/data/img.jpg; a URL to an image or PDF file, e.g. Example; or a local directory containing the images to be predicted, e.g. /root/data/ (predicting PDFs inside a directory is not currently supported; a PDF must be given as an explicit file path)
    • +
    • List: List elements must be of the above types, such as [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"]
    • +
    +
    Python Var|str|list
    save_pathPath to save inference result files. If set to None, inference results will not be saved locally. str None
    -
    -
    + +doc_orientation_classify_model_name +Name of the document orientation classification model. If set to None, the production line default model will be used. +str +None + + +doc_orientation_classify_model_dir +Directory path of the document orientation classification model. If set to None, the official model will be downloaded. +str +None + + +doc_unwarping_model_name +Name of the text image unwarping model. If set to None, the production line default model will be used. +str +None + + +doc_unwarping_model_dir +Directory path of the text image unwarping model. If set to None, the official model will be downloaded. +str +None + + +text_detection_model_name +Name of the text detection model. If set to None, the production line default model will be used. +str +None + + +text_detection_model_dir +Directory path of the text detection model. If set to None, the official model will be downloaded. +str +None + + +text_line_orientation_model_name +Name of the text line orientation model. If set to None, the production line default model will be used. +str +None + + +text_line_orientation_model_dir +Directory path of the text line orientation model. If set to None, the official model will be downloaded. +str +None + + +text_line_orientation_batch_size +Batch size for the text line orientation model. If set to None, the default batch size will be 1. +int +None + + +text_recognition_model_name +Name of the text recognition model. If set to None, the production line default model will be used. +str +None + + +text_recognition_model_dir +Directory path of the text recognition model. If set to None, the official model will be downloaded. +str +None + + +text_recognition_batch_size +Batch size for the text recognition model. If set to None, the default batch size will be 1. +int +None + + +use_doc_orientation_classify +Whether to use the document orientation classification function. 
If set to None, the production line's initialized value for this parameter (initialized to True) will be used. +bool +None + + +use_doc_unwarping +Whether to use the text image unwarping function. If set to None, the production line's initialized value for this parameter (initialized to True) will be used. +bool +None + + +use_textline_orientation +Whether to use the text line orientation function. If set to None, the production line's initialized value for this parameter (initialized to True) will be used. +bool +None + + +text_det_limit_side_len +Maximum side length limit for text detection. + + +int +None + + +text_det_limit_type +Type of side length limit for text detection. + + +str +None + + +text_det_thresh +Pixel threshold for text detection. In the output probability map, pixels with scores higher than this threshold will be considered text pixels. + + +float +None + + +text_det_box_thresh +Text detection box threshold. If the average score of all pixels within the detected result boundary is higher than this threshold, the result will be considered a text region. + + +float +None + + +text_det_unclip_ratio +Text detection expansion coefficient. This method is used to expand the text region—the larger the value, the larger the expanded area. + + +float +None + + +text_det_input_shape +Input shape for text detection. +tuple +None + + +text_rec_score_thresh +Text recognition threshold. Text results with scores higher than this threshold will be retained. + + +float +None + + +text_rec_input_shape +Input shape for text recognition. +tuple +None + + +lang +OCR model for a specified language. + + +str +None + + +ocr_version +OCR version. + + +str +None + + +det_model_dir +Deprecated. Please use text_detection_model_dir instead. Directory path of the text detection model. If set to None, the official model will be downloaded. +str +None + + +det_limit_side_len +Deprecated. Please use text_det_limit_side_len instead. Maximum side length limit for text detection. 
+int +None + + +det_limit_type +Deprecated. Please use text_det_limit_type instead. Type of side length limit for text detection. + + +str +None + + +det_db_thresh +Deprecated. Please use text_det_thresh instead. Pixel threshold for text detection. In the output probability map, pixels with scores higher than this threshold will be considered text pixels. + + +float +None + + +det_db_box_thresh +Deprecated. Please use text_det_box_thresh instead. Text detection box threshold. If the average score of all pixels within the detected result boundary is higher than this threshold, the result will be considered a text region. + + +float +None + + +det_db_unclip_ratio +Deprecated. Please use text_det_unclip_ratio instead. Text detection expansion coefficient. This method is used to expand the text region—the larger the value, the larger the expanded area. + + +float +None + + +rec_model_dir +Deprecated. Please use text_recognition_model_dir instead. Directory path of the text recognition model. If set to None, the official model will be downloaded. +str +None + + +rec_batch_num +Deprecated. Please use text_recognition_batch_size instead. Batch size for the text recognition model. If set to None, the default batch size will be 1. +int +None + + +use_angle_cls +Deprecated. Please use use_textline_orientation instead. Whether to use the text line orientation function. If set to None, the production line's initialized value for this parameter (initialized to True) will be used. +bool +None + + +cls_model_dir +Deprecated. Please use text_line_orientation_model_dir instead. Directory path of the text line orientation model. If set to None, the official model will be downloaded. +str +None + + +cls_batch_num +Deprecated. Please use text_line_orientation_batch_size instead. Batch size for the text line orientation model. If set to None, the default batch size will be 1. +int +None + + +device +Device for inference. Supports specifying a specific card number. 
+ + +str +None + + +enable_hpi +Whether to enable high-performance inference. +bool +False + + +use_tensorrt +Whether to use TensorRT for inference acceleration. +bool +False + + +min_subgraph_size +Minimum subgraph size for optimizing model subgraph computation. +int +3 + + +precision +Computational precision, such as fp32, fp16. +str +fp32 + + +enable_mkldnn +Whether to enable the MKL-DNN acceleration library. If set to None, it will be enabled by default. + +bool +None + + +cpu_threads +Number of threads used for inference on CPU. +int +8 + + +paddlex_config +Path to the PaddleX production line configuration file. +str +None + + + + +
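The interaction between text_det_limit_side_len and text_det_limit_type documented above can be sketched in plain Python. This is an illustrative reimplementation of the documented semantics only, not the pipeline's actual preprocessing code; exact rounding behavior may differ:

```python
def scaled_size(width, height, limit_side_len=960, limit_type="max"):
    """Illustrative resize rule: 'max' caps the longest side at
    limit_side_len; 'min' raises the shortest side to limit_side_len."""
    if limit_type == "max":
        longest = max(width, height)
        scale = limit_side_len / longest if longest > limit_side_len else 1.0
    elif limit_type == "min":
        shortest = min(width, height)
        scale = limit_side_len / shortest if shortest < limit_side_len else 1.0
    else:
        raise ValueError("limit_type must be 'min' or 'max'")
    return round(width * scale), round(height * scale)

print(scaled_size(2000, 1000))                  # 'max': longest side capped at 960
print(scaled_size(800, 600, limit_type="min"))  # 'min': shortest side raised to 960
```

In practice, max shrinks a large page before detection, while min enlarges a small crop so thin text strokes remain detectable.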
    Results are printed to the terminal: @@ -871,493 +952,484 @@ for res in result: res.save_to_json("output") ``` -The Python script above performs the following steps: +In the above Python script, the following steps are performed: -
    (1) Initialize the OCR pipeline with PaddleOCR(). Parameter details: +
(1) Instantiate the OCR pipeline object via PaddleOCR(), with specific parameter descriptions as follows: 
    ParameterDescriptionTypeDefault
    doc_orientation_classify_model_nameName of the document orientation model. If None, uses the default pipeline model.strNone
    doc_orientation_classify_model_dirDirectory path of the document orientation model. If None, downloads the official model.strNone
    doc_unwarping_model_nameName of the text image correction model. If None, uses the default pipeline model.strNone
    doc_unwarping_model_dirDirectory path of the text image correction model. If None, downloads the official model.strNone
    text_detection_model_nameName of the text detection model. If None, uses the default pipeline model.strNone
    text_detection_model_dirDirectory path of the text detection model. If None, downloads the official model.strNone
    text_line_orientation_model_nameName of the text line orientation model. If None, uses the default pipeline model.strNone
    text_line_orientation_model_dirDirectory path of the text line orientation model. If None, downloads the official model.strNone
    text_line_orientation_batch_sizeBatch size for the text line orientation model. If None, defaults to 1.intNone
    text_recognition_model_nameName of the text recognition model. If None, uses the default pipeline model.strNone
    text_recognition_model_dirDirectory path of the text recognition model. If None, downloads the official model.strNone
    text_recognition_batch_sizeBatch size for the text recognition model. If None, defaults to 1.intNone
    use_doc_orientation_classifyWhether to enable document orientation classification. If None, defaults to pipeline initialization (True).boolNone
    use_doc_unwarpingWhether to enable text image correction. If None, defaults to pipeline initialization (True).boolNone
    use_textline_orientationWhether to enable text line orientation classification. If None, defaults to pipeline initialization (True).boolNone
    text_det_limit_side_lenMaximum side length limit for text detection. -
      -
    • int: Any integer > 0;
    • -
    • None: If None, defaults to pipeline initialization (960).
    • -
    -
    intNone
    text_det_limit_typeSide length limit type for text detection. -
      -
    • str: Supports min (ensures shortest side ≥ det_limit_side_len) or max (ensures longest side ≤ limit_side_len);
    • -
    • None: If None, defaults to pipeline initialization (max).
    • -
    -
    strNone
    text_det_threshPixel threshold for text detection. Pixels with scores > this threshold are considered text. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization (0.3).
    • -
    -
    floatNone
    text_det_box_threshBox threshold for text detection. Detected regions with average scores > this threshold are retained. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization (0.6).
    • -
    -
    floatNone
    text_det_unclip_ratioExpansion ratio for text detection. Larger values expand text regions more. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization (2.0).
    • -
    -
    floatNone
    text_det_input_shapeInput shape for text detection.tupleNone
    text_rec_score_threshScore threshold for text recognition. Results with scores > this threshold are retained. -
      -
    • float: Any float > 0;
    • -
    • None: If None, defaults to pipeline initialization (0.0, no threshold).
    • -
    -
    floatNone
    text_rec_input_shapeInput shape for text recognition.tupleNone
    langSpecifies the OCR model language. -
      -
    • ch: Chinese;
    • -
    • en: English;
    • -
    • korean: Korean;
    • -
    • japan: Japanese;
    • -
    • chinese_cht: Traditional Chinese;
    • -
    • te: Telugu;
    • -
    • ka: Kannada;
    • -
    • ta: Tamil;
    • -
    • None: If None, defaults to ch.
    • -
    -
    strNone
    ocr_versionOCR model version. -
      -
    • PP-OCRv5: Uses PP-OCRv5 models;
    • -
    • PP-OCRv4: Uses PP-OCRv4 models;
    • -
    • PP-OCRv3: Uses PP-OCRv3 models;
    • -
    • None: If None, defaults to PP-OCRv5 models.
    • -
    -
    strNone
    deviceDevice for inference. Supports: -
      -
    • CPU: cpu;
    • -
    • GPU: gpu:0 (first GPU);
    • -
    • NPU: npu:0;
    • -
    • XPU: xpu:0;
    • -
    • MLU: mlu:0;
    • -
    • DCU: dcu:0;
    • -
    • None: If None, defaults to GPU 0 (if available) or CPU.
    • -
    -
    strNone
    enable_hpiWhether to enable high-performance inference.boolFalse
    use_tensorrtWhether to use TensorRT for acceleration.boolFalse
    min_subgraph_sizeMinimum subgraph size for model optimization.int3
    precisionComputation precision (e.g., fp32, fp16).strfp32
    enable_mkldnnWhether to enable MKL-DNN acceleration. If None, enabled by default.boolNone
    cpu_threadsNumber of CPU threads for inference.int8
    + + + + + + + + + - - + + - -
    ParameterParameter DescriptionParameter TypeDefault Value
paddlex_configPath to PaddleX pipeline configuration file.doc_orientation_classify_model_nameName of the document orientation classification model. If set to None, the pipeline's default model will be used.strNone
    -
    - -
    (2) Call the predict() method for inference. Alternatively, predict_iter() returns a generator for memory-efficient batch processing. Parameters: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ParameterDescriptionTypeDefault
    inputInput data (required). Supports: -
      -
    • Python Var: e.g., numpy.ndarray image data;
    • -
    • str: Local file path (e.g., /root/data/img.jpg), URL (e.g., example), or directory (e.g., /root/data/);
    • -
    • List: List of inputs, e.g., [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"].
    • -
    -
    Python Var|str|list
    deviceSame as initialization.strNone
    use_doc_orientation_classifyWhether to enable document orientation classification during inference.boolNone
    use_doc_unwarpingWhether to enable text image correction during inference.boolNone
    use_textline_orientationWhether to enable text line orientation classification during inference.boolNone
    text_det_limit_side_lenSame as initialization.intNone
    text_det_limit_typeSame as initialization.strNone
    text_det_threshSame as initialization.floatNone
    text_det_box_threshSame as initialization.floatNone
    text_det_unclip_ratioSame as initialization.floatNone
    doc_orientation_classify_model_dirDirectory path of the document orientation classification model. If set to None, the official model will be downloaded.strNone
doc_unwarping_model_nameName of the text image unwarping model. If set to None, the pipeline's default model will be used.strNone
doc_unwarping_model_dirDirectory path of the text image unwarping model. If set to None, the official model will be downloaded.strNone
text_detection_model_nameName of the text detection model. If set to None, the pipeline's default model will be used.strNone
text_detection_model_dirDirectory path of the text detection model. If set to None, the official model will be downloaded.strNone
text_line_orientation_model_nameName of the text line orientation model. If set to None, the pipeline's default model will be used.strNone
text_line_orientation_model_dirDirectory path of the text line orientation model. If set to None, the official model will be downloaded.strNone
text_line_orientation_batch_sizeBatch size for the text line orientation model. If set to None, the default batch size will be 1.intNone
text_recognition_model_nameName of the text recognition model. If set to None, the pipeline's default model will be used.strNone
text_recognition_model_dirDirectory path of the text recognition model. If set to None, the official model will be downloaded.strNone
text_recognition_batch_sizeBatch size for the text recognition model. If set to None, the default batch size will be 1.intNone
use_doc_orientation_classifyWhether to use the document orientation classification function. If set to None, the value set at pipeline initialization (True) will be used.boolNone
use_doc_unwarpingWhether to use the text image unwarping function. If set to None, the value set at pipeline initialization (True) will be used.boolNone
use_textline_orientationWhether to use the text line orientation function. If set to None, the value set at pipeline initialization (True) will be used.boolNone
    text_det_limit_side_lenMaximum side length limit for text detection. +
      +
    • int: Any integer greater than 0;
    • +
• None: If set to None, the value set at pipeline initialization (960) will be used;
    • +
    +
    intNone
    text_det_limit_typeType of side length limit for text detection. +
      +
• str: Supports min and max, where min ensures the shortest side of the image is not smaller than text_det_limit_side_len, and max ensures the longest side is not larger than text_det_limit_side_len
    • +
• None: If set to None, the value set at pipeline initialization (max) will be used;
    • +
    +
    strNone
    text_det_threshPixel threshold for text detection. Pixels with scores higher than this threshold in the output probability map will be considered text pixels. +
      +
    • float: Any floating-point number greater than 0 +
• None: If set to None, the value set at pipeline initialization (0.3) will be used;
    +
    floatNone
    text_det_box_threshBox threshold for text detection. A detection result will be considered a text region if the average score of all pixels within the bounding box is higher than this threshold. +
      +
    • float: Any floating-point number greater than 0 +
• None: If set to None, the value set at pipeline initialization (0.6) will be used;
    +
    floatNone
text_det_unclip_ratioExpansion coefficient for text detection, used to dilate the detected text region; the larger the value, the larger the expanded region. +
      +
    • float: Any floating-point number greater than 0 +
• None: If set to None, the value set at pipeline initialization (2.0) will be used;
    +
    floatNone
    text_det_input_shapeInput shape for text detection.tupleNone
    text_rec_score_threshSame as initialization.Recognition score threshold for text. Text results with scores higher than this threshold will be retained. +
      +
    • float: Any floating-point number greater than 0 +
• None: If set to None, the value set at pipeline initialization (0.0, i.e., no threshold) will be used;
    +
    floatNone
    text_rec_input_shapeInput shape for text recognition.tupleNone
    langOCR model language to use. +
      +
    • ch: Chinese;
    • +
    • en: English;
    • +
    • korean: Korean;
    • +
    • japan: Japanese;
    • +
    • chinese_cht: Traditional Chinese;
    • +
    • te: Telugu;
    • +
    • ka: Kannada;
    • +
    • ta: Tamil;
    • +
    • None: If set to None, ch will be used by default;
    • +
    +
    strNone
    ocr_versionOCR version. +
      +
    • PP-OCRv5: Use PP-OCRv5 series models;
    • +
    • PP-OCRv4: Use PP-OCRv4 series models;
    • +
    • PP-OCRv3: Use PP-OCRv3 series models;
    • +
    • None: If set to None, PP-OCRv5 series models will be used by default;
    • +
    +
    strNone
    deviceDevice for inference. Supports specifying a specific card number. +
      +
    • CPU: e.g., cpu for CPU inference;
    • +
    • GPU: e.g., gpu:0 for inference on the 1st GPU;
    • +
    • NPU: e.g., npu:0 for inference on the 1st NPU;
    • +
    • XPU: e.g., xpu:0 for inference on the 1st XPU;
    • +
    • MLU: e.g., mlu:0 for inference on the 1st MLU;
    • +
    • DCU: e.g., dcu:0 for inference on the 1st DCU;
    • +
• None: If set to None, the value set at pipeline initialization will be used: the local GPU 0 device is preferred if available, otherwise the CPU;
    • +
    +
    strNone
    enable_hpiWhether to enable high-performance inference.boolFalse
    use_tensorrtWhether to use TensorRT for inference acceleration.boolFalse
    min_subgraph_sizeMinimum subgraph size for optimizing subgraph computation.int3
    precisionComputational precision, such as fp32, fp16.strfp32
    enable_mkldnnWhether to enable the MKL-DNN acceleration library. If set to None, it will be enabled by default.boolNone
    cpu_threadsNumber of threads used for CPU inference.int8
paddlex_configPath to the PaddleX pipeline configuration file.strNone
    +
    + +
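Almost every constructor parameter in the table above follows a single convention: passing None means "fall back to the value fixed at pipeline initialization". A minimal sketch of that resolution logic follows; the defaults are copied from the table, while the dictionary and function names are hypothetical illustrations, not PaddleOCR internals:

```python
# Hypothetical defaults for illustration, copied from the parameter table above.
PIPELINE_DEFAULTS = {
    "text_det_limit_side_len": 960,
    "text_det_limit_type": "max",
    "text_det_thresh": 0.3,
    "text_det_box_thresh": 0.6,
    "text_det_unclip_ratio": 2.0,
    "text_rec_score_thresh": 0.0,
    "lang": "ch",
    "ocr_version": "PP-OCRv5",
}

def resolve(name, value):
    """Return the user-supplied value, or the documented default when it is None."""
    return PIPELINE_DEFAULTS[name] if value is None else value

print(resolve("text_det_thresh", None))  # None falls back to the default, 0.3
print(resolve("text_det_thresh", 0.5))   # an explicit value always wins
```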
(2) Invoke the predict() method of the OCR pipeline object to run inference; it returns a list of results. The pipeline also provides a predict_iter() method, which accepts the same parameters and returns the same results, except that it returns a generator that processes inputs and yields predictions incrementally, making it suitable for large datasets or memory-constrained scenarios; choose whichever method fits your needs. The parameters of the predict() method are described below: 
| Parameter | Description | Type | Default |
| --- | --- | --- | --- |
| `input` | Data to be predicted (required). Supports multiple input types:<br>• **Python Var**: image data represented by `numpy.ndarray`;<br>• **str**: local path of an image or PDF file, e.g. `/root/data/img.jpg`; a URL of an image or PDF file; or a local directory containing the images to predict, e.g. `/root/data/` (predicting PDF files inside a directory is currently not supported; a PDF must be given by its specific file path);<br>• **List**: list elements of the above types, e.g. `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]`. | `Python Var\|str\|list` | (required) |
| `device` | Same as the parameter at instantiation. | `str` | `None` |
| `use_doc_orientation_classify` | Whether to use the document orientation classification module during inference. | `bool` | `None` |
| `use_doc_unwarping` | Whether to use the text image unwarping module during inference. | `bool` | `None` |
| `use_textline_orientation` | Whether to use the text line orientation classification module during inference. | `bool` | `None` |
| `text_det_limit_side_len` | Same as the parameter at instantiation. | `int` | `None` |
| `text_det_limit_type` | Same as the parameter at instantiation. | `str` | `None` |
| `text_det_thresh` | Same as the parameter at instantiation. | `float` | `None` |
| `text_det_box_thresh` | Same as the parameter at instantiation. | `float` | `None` |
| `text_det_unclip_ratio` | Same as the parameter at instantiation. | `float` | `None` |
| `text_rec_score_thresh` | Same as the parameter at instantiation. | `float` | `None` |
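The difference between `predict()` and `predict_iter()` is one of evaluation strategy only; a schematic stand-in (with a dummy `process` function, not the real pipeline) could look like this:

```python
def process(path):
    # Stand-in for running OCR on a single input.
    return {"input_path": path, "rec_texts": []}

def predict(inputs):
    # Materializes every result up front and returns a list.
    return [process(x) for x in inputs]

def predict_iter(inputs):
    # Yields results one at a time, so only one needs to be held in memory.
    for x in inputs:
        yield process(x)
```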
(3) Process the prediction results. The prediction result for each sample is a corresponding `Result` object, which supports printing, saving as an image, and saving as a JSON file:
| Method | Description | Parameter | Type | Parameter Description | Default |
| --- | --- | --- | --- | --- | --- |
| `print()` | Print the results to the terminal | `format_json` | `bool` | Whether to format the output with JSON indentation | `True` |
| | | `indent` | `int` | Indentation level used to beautify the JSON output; effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode. `True` escapes all non-ASCII characters; `False` keeps the original characters. Effective only when `format_json` is `True` | `False` |
| `save_to_json()` | Save the results as a JSON file | `save_path` | `str` | Path to save to. When a directory is given, the saved file is named after the input file | No default |
| | | `indent` | `int` | Indentation level used to beautify the JSON output; effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode. `True` escapes all non-ASCII characters; `False` keeps the original characters. Effective only when `format_json` is `True` | `False` |
| `save_to_img()` | Save the results as an image file | `save_path` | `str` | Path to save to; a directory or a file path is supported | No default |
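The `format_json`, `indent`, and `ensure_ascii` parameters behave like their counterparts in Python's standard `json.dumps`; a minimal illustration of what each one controls:

```python
import json

data = {"rec_texts": ["登机牌"], "rec_scores": [0.99]}

# Default: single line, non-ASCII characters escaped to \uXXXX sequences.
compact = json.dumps(data)

# indent=4 beautifies the output; ensure_ascii=False keeps "登机牌" readable.
pretty = json.dumps(data, indent=4, ensure_ascii=False)
```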
- Calling the `print()` method prints the results to the terminal. The printed content is explained as follows:

  - `input_path`: `(str)` Input path of the image to be predicted

  - `page_index`: `(Union[int, None])` If the input is a PDF file, the PDF page number; otherwise `None`

  - `model_settings`: `(Dict[str, bool])` Model parameters configured for the pipeline
    - `use_doc_preprocessor`: `(bool)` Whether the document preprocessing sub-pipeline is enabled
    - `use_textline_orientation`: `(bool)` Whether text line orientation classification is enabled

  - `doc_preprocessor_res`: `(Dict[str, Union[str, Dict[str, bool], int]])` Output of the document preprocessing sub-pipeline. Present only when `use_doc_preprocessor=True`
    - `input_path`: `(Union[str, None])` Image path accepted by the preprocessing sub-pipeline; `None` when the input is a `numpy.ndarray`
    - `model_settings`: `(Dict)` Model configuration of the preprocessing sub-pipeline
      - `use_doc_orientation_classify`: `(bool)` Whether document orientation classification is enabled
      - `use_doc_unwarping`: `(bool)` Whether text image unwarping is enabled
    - `angle`: `(int)` Prediction result of document orientation classification. When enabled, one of [0,1,2,3], corresponding to [0°,90°,180°,270°]; when disabled, -1

  - `dt_polys`: `(List[numpy.ndarray])` List of text detection polygon boxes. Each box is a numpy array of 4 vertex coordinates with shape (4, 2) and dtype int16

  - `dt_scores`: `(List[float])` Confidence scores of the text detection boxes

  - `text_det_params`: `(Dict[str, Dict[str, int, float]])` Configuration parameters of the text detection module
    - `limit_side_len`: `(int)` Side-length limit applied during image preprocessing
    - `limit_type`: `(str)` How the side-length limit is applied
    - `thresh`: `(float)` Confidence threshold for classifying pixels as text
    - `box_thresh`: `(float)` Confidence threshold for text detection boxes
    - `unclip_ratio`: `(float)` Dilation coefficient for text detection boxes
    - `text_type`: `(str)` Type of text detection, currently fixed as "general"

  - `textline_orientation_angles`: `(List[int])` Prediction results of text line orientation classification. When enabled, actual angle values are returned (e.g., [0,0,1]); when disabled, [-1,-1,-1]

  - `text_rec_score_thresh`: `(float)` Filtering threshold for text recognition results

  - `rec_texts`: `(List[str])` List of recognized texts, containing only those whose confidence exceeds `text_rec_score_thresh`

  - `rec_scores`: `(List[float])` Confidence scores of the recognized texts, filtered by `text_rec_score_thresh`

  - `rec_polys`: `(List[numpy.ndarray])` Text detection boxes after confidence filtering, in the same format as `dt_polys`

  - `rec_boxes`: `(numpy.ndarray)` Array of rectangular bounding boxes for the detection boxes, with shape (n, 4) and dtype int16. Each row gives the [x_min, y_min, x_max, y_max] coordinates of a rectangle, where (x_min, y_min) is the top-left corner and (x_max, y_max) is the bottom-right corner

- Calling the `save_to_json()` method saves the above content to the specified `save_path`. If a directory is given, the file is saved as `save_path/{your_img_basename}_res.json`; if a file path is given, it is saved directly to that file. Since JSON files cannot store numpy arrays, `numpy.array` values are converted to lists.
- Calling the `save_to_img()` method saves the visualization results to the specified `save_path`. If a directory is given, the image is saved as `save_path/{your_img_basename}_ocr_res_img.{your_img_extension}`; if a file path is given, it is saved directly to that file. (The pipeline usually produces several result images, so specifying a concrete file path is not recommended; later images would overwrite earlier ones, leaving only the last.)

* In addition, the visualized images and the prediction results can also be obtained through the following attributes:
| Attribute | Description |
| --- | --- |
| `json` | Get the prediction results in JSON format |
| `img` | Get the visualized images in `dict` format |
- The prediction result obtained through the `json` attribute is a `dict`, and its content is identical to what the `save_to_json()` method saves.
- The `img` attribute returns a `dict` whose keys are `ocr_res_img` and `preprocessed_img`, with two `Image.Image` objects as the corresponding values: one visualizing the OCR results and one visualizing the image preprocessing. If the image preprocessing submodule is not used, the dictionary contains only `ocr_res_img`.
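To make the relationship among `text_rec_score_thresh`, `rec_polys`, and `rec_boxes` described earlier concrete, here is a hedged sketch using plain lists in place of numpy arrays (`filter_and_box` is an illustrative helper, not part of the PaddleOCR API):

```python
def filter_and_box(dt_polys, rec_texts, rec_scores, thresh):
    """Keep results whose score passes the threshold and derive axis-aligned boxes."""
    kept = [(p, t, s) for p, t, s in zip(dt_polys, rec_texts, rec_scores) if s >= thresh]
    return {
        "rec_texts":  [t for _, t, _ in kept],
        "rec_scores": [s for _, _, s in kept],
        "rec_polys":  [p for p, _, _ in kept],
        # Each row: [x_min, y_min, x_max, y_max] of the polygon's bounding rectangle.
        "rec_boxes":  [[min(x for x, _ in p), min(y for _, y in p),
                        max(x for x, _ in p), max(y for _, y in p)] for p, _, _ in kept],
    }
```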
    @@ -1625,11 +1697,9 @@ for i, res in enumerate(result["ocrResults"]):
## 4. Custom Development

If the default model weights provided by the general OCR pipeline do not meet your expectations in terms of accuracy or speed in your scenario, you can use your own domain-specific or application-specific data to further fine-tune the existing models, thereby improving the recognition performance of the general OCR pipeline in your use case.

### 4.1 Model Fine-Tuning

The general OCR pipeline consists of multiple modules. If the pipeline's performance does not meet expectations, the issue may stem from any of these modules. You can analyze poorly recognized images to identify the problematic module and refer to the corresponding fine-tuning tutorials in the table below for adjustments.

diff --git a/docs/version3.x/pipeline_usage/OCR.md b/docs/version3.x/pipeline_usage/OCR.md

```bash
# Specify another PP-OCR version via --ocr_version
paddleocr ocr -i ./general_ocr_002.png --ocr_version PP-OCRv4
```
The command line supports more parameter settings. Click to expand for a detailed description of the command-line parameters.
    - + @@ -1405,8 +1404,7 @@ for res in result: - `rec_polys`: `(List[numpy.ndarray])` 经过置信度过滤的文本检测框列表,格式同`dt_polys` - - `rec_boxes`: `(numpy.ndarray)` 检测框的矩形边界框数组,shape为(n, 4),dtype为int16。每一行表示一个矩形框的[x_min, y_min, x_max, y_max]坐标 - ,其中(x_min, y_min)为左上角坐标,(x_max, y_max)为右下角坐标 + - `rec_boxes`: `(numpy.ndarray)` 检测框的矩形边界框数组,shape为(n, 4),dtype为int16。每一行表示一个矩形框的[x_min, y_min, x_max, y_max]坐标,其中(x_min, y_min)为左上角坐标,(x_max, y_max)为右下角坐标 - 调用`save_to_json()` 方法会将上述内容保存到指定的`save_path`中,如果指定为目录,则保存的路径为`save_path/{your_img_basename}_res.json`,如果指定为文件,则直接保存到该文件中。由于json文件不支持保存numpy数组,因此会将其中的`numpy.array`类型转换为列表形式。 - 调用`save_to_img()` 方法会将可视化结果保存到指定的`save_path`中,如果指定为目录,则保存的路径为`save_path/{your_img_basename}_ocr_res_img.{your_img_extension}`,如果指定为文件,则直接保存到该文件中。(产线通常包含较多结果图片,不建议直接指定为具体的文件路径,否则多张图会被覆盖,仅保留最后一张图)
    enable_mkldnn是否启用 MKL-DNN 加速库。如果设置为None, 将默认启用。 -是否启用 MKL-DNN 加速库。如果设置为None, 将默认启用。 bool None