docs: fix typos and translation (#14727)

pull/14887/head
Jan authored on 2025-03-18 01:14:06 +01:00, committed by GitHub
parent 9bd0177cff
commit 5e36dc9983
6 changed files with 7 additions and 7 deletions

@@ -65,7 +65,7 @@ Full documentation can be found on [docs](https://paddlepaddle.github.io/PaddleO
## 🌟 Features
-PaddleOCR support a variety of cutting-edge algorithms related to OCR, and developed industrial featured models/solution [PP-OCR](https://paddlepaddle.github.io/PaddleOCR/latest/en/ppocr/overview.html) [PP-Structure](https://paddlepaddle.github.io/PaddleOCR/latest/en/ppstructure/overview.html) and [PP-ChatOCR](https://aistudio.baidu.com/aistudio/projectdetail/6488689) on this basis, and get through the whole process of data production, model training, compression, inference and deployment.
+PaddleOCR support a variety of cutting-edge algorithms related to OCR, and developed industrial featured models/solution [PP-OCR](https://paddlepaddle.github.io/PaddleOCR/latest/en/ppocr/overview.html), [PP-Structure](https://paddlepaddle.github.io/PaddleOCR/latest/en/ppstructure/overview.html) and [PP-ChatOCR](https://aistudio.baidu.com/aistudio/projectdetail/6488689) on this basis, and get through the whole process of data production, model training, compression, inference and deployment.
<div align="center">
<img src="./docs/images/ppocrv4_en.jpg">

@@ -102,7 +102,7 @@ The Python code of PaddleOCR follows [PEP8 Specification]( https://www.python.or
> paddleocr --image_dir ./imgs/11.jpg --use_angle_cls true
> ```
-- Variable Rrferences: If code variables or command parameters are referenced in line, they need to be represented in line code, for example, above `--use_angle_cls true` with one space in front and one space in back
+- Variable References: If code variables or command parameters are referenced in line, they need to be represented in line code, for example, above `--use_angle_cls true` with one space in front and one space in back
- Uniform naming: e.g. PP-OCRv2, PP-OCR mobile, `paddleocr` whl package, PPOCRLabel, Paddle Lite, etc.

@@ -13,7 +13,7 @@ hide:
Run OCR demo in browser refer to [tutorial](https://github.com/PaddlePaddle/FastDeploy/blob/cd0ee79c91d4ed1103abdc65ff12ccadd23d0827/examples/application/js/WebDemo.md).
-|demo|web demo dicrctory|visualization|
+|demo|web demo directory|visualization|
|-|-|-|
|PP-OCRv3|[TextDetection、TextRecognition](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/application/js/web_demo/src/pages/cv/ocr/)|![](./images/196874354-1b5eecb0-f273-403c-aa6c-4463bf6d78db.png)|

@@ -142,7 +142,7 @@ In PaddleOCR, the network is divided into four stages: Transform, Backbone, Neck
## 3. Multilingual Config File Generation
PaddleOCR currently supports recognition for 80 languages (besides Chinese). A multi-language configuration file template is
-provided under the path `configs/rec/multi_languages`: [rec_multi_language_lite_train.yml](https://github.com/PaddlePaddle/PaddleOCR/tree/main/configs/rec/multi_language/rec_multi_language_lite_train.yml)
+provided under the path `configs/rec/multi_languages`: [rec_multi_language_lite_train.yml](https://github.com/PaddlePaddle/PaddleOCR/tree/main/configs/rec/multi_language/rec_multi_language_lite_train.yml).
There are two ways to create the required configuration file:
@@ -237,7 +237,7 @@ Currently, the multi-language algorithms supported by PaddleOCR are:
| rec_cyrillic_lite_train.yml | CRNN | Mobilenet_v3 small 0.5 | None | BiLSTM | ctc | cyrillic |
| rec_devanagari_lite_train.yml | CRNN | Mobilenet_v3 small 0.5 | None | BiLSTM | ctc | devanagari |
-For more supported languages, please refer to : [Multi-language model](./multi_languages.en.md)
+For more supported languages, please refer to: [Multi-language model](./multi_languages.en.md)
The multi-language model training method is the same as the Chinese model. The training data set is 100w synthetic data. A small amount of fonts and test data can be downloaded using the following two methods.
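
Training with one of these multilingual configs follows the same recipe as the Chinese model; below is a minimal sketch, assuming a local PaddleOCR checkout, the standard `tools/train.py` entry point, and placeholder paths (the `-o` override keys follow the usual config layout and may need adjusting for your setup):

```bash
# Minimal sketch: train a lightweight multilingual recognizer from the template
# config referenced above. Dataset and output paths are placeholders, not part
# of this commit.
cd PaddleOCR

python3 tools/train.py \
    -c configs/rec/multi_language/rec_multi_language_lite_train.yml \
    -o Train.dataset.data_dir=./train_data/ \
       Global.save_model_dir=./output/rec_multi_language/
```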

@@ -7,7 +7,7 @@ comments: true
## 1 Get started quickly
-### 1.1 install package
+### 1.1 Install package
install by pypi
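
For the pypi route mentioned above, a minimal sketch (assuming PaddlePaddle itself is already installed; the image path is a placeholder):

```bash
# Install the released wheel from PyPI (PaddlePaddle must be installed beforehand).
pip install paddleocr

# Run detection + angle classification + recognition on a single image from the CLI;
# the image path is a placeholder.
paddleocr --image_dir ./imgs/11.jpg --use_angle_cls true
```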

@@ -11,7 +11,7 @@ comments: true
Key information extraction (KIE) refers to extracting key information from text or images. As the downstream task of OCR, KIE of document image has many practical application scenarios, such as form recognition, ticket information extraction, ID card information extraction, etc. However, it is time-consuming and laborious to extract key information from these document images by manpower. It's challengable but also valuable to combine multi-modal features (visual, layout, text, etc) together and complete KIE tasks.
-For the document images in a specific scene, the position and layout of the key information are relatively fixed. Therefore, in the early stage of the research, there are many methods based on template matching to extract the key information. This method is still widely used in many simple scenarios at present. However, it takes long time to adjut the template for different scenarios.
+For the document images in a specific scene, the position and layout of the key information are relatively fixed. Therefore, in the early stage of the research, there are many methods based on template matching to extract the key information. This method is still widely used in many simple scenarios at present. However, it takes long time to adjust the template for different scenarios.
The KIE in the document image generally contains 2 subtasks, which is as shown follows.