mirror of
https://github.com/PaddlePaddle/PaddleOCR.git
synced 2025-06-03 21:53:39 +08:00
add jeston
This commit is contained in:
parent
21197ed26c
commit
00be306224
deploy/jeston/readme.md (new file)
@@ -0,0 +1,72 @@
# Jetson

This section describes how to deploy PaddleOCR on the Jetson NX, TX2, Nano, AGX and other series of hardware.

## 1. Environment Preparation

Prepare a Jetson development board. If you want to run inference with TensorRT, set up the TensorRT environment first; TensorRT 7.1.3 is recommended.

1. Install paddlepaddle on the Jetson

Download paddlepaddle from this [link](https://www.paddlepaddle.org.cn/inference/user_guides/download_lib.html#python).
Please choose the wheel that matches your JetPack version, CUDA version, and TensorRT version.
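To pick the right wheel you need to know which L4T/JetPack release the board is running. A minimal sketch that reads the `/etc/nv_tegra_release` file JetPack ships (the helper name `l4t_release` is this sketch's own; it falls back to `"unknown"` on non-Jetson machines):

```python
from pathlib import Path

def l4t_release(path: str = "/etc/nv_tegra_release") -> str:
    """Return the first line of the L4T release file, or 'unknown' if absent."""
    p = Path(path)
    if not p.exists():
        return "unknown"
    lines = p.read_text().splitlines()
    return lines[0].strip() if lines else "unknown"

print(l4t_release())  # "unknown" unless run on a Jetson board
```

On a board, the printed L4T line identifies the JetPack generation, which in turn fixes the CUDA and TensorRT versions the wheel must match.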

Installation command:

```shell
pip3 install -U paddlepaddle_gpu-*-cp36-cp36m-linux_aarch64.whl
```
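After installing the wheel, it is worth confirming that the build actually imports and runs. A small sanity check using `paddle.utils.run_check()`, PaddlePaddle's built-in self-test (the wrapper function and its fallback message are this sketch's own):

```python
def check_paddle() -> str:
    """Run PaddlePaddle's built-in self-test and return the installed version."""
    try:
        import paddle
    except ImportError:
        return "paddlepaddle is not installed"
    paddle.utils.run_check()  # runs a small program to verify the CPU/GPU build
    return paddle.__version__

print(check_paddle())
```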

2. Download the PaddleOCR code and install its dependencies

First, clone the PaddleOCR code:

```shell
git clone https://github.com/PaddlePaddle/PaddleOCR
```

Then install the dependencies:

```shell
cd PaddleOCR
pip3 install -r requirements.txt
```

*Note: the Jetson CPU is relatively weak, so installing the dependencies is slow; please be patient.*

## 2. Run Inference

Obtain a PP-OCR model from the model zoo in the [documentation](../../doc/doc_ch/ppocr_introduction.md). The PP-OCRv3 model is used below as an example of how to run a PP-OCR model on the Jetson.

Download and extract the PP-OCRv3 models:

```shell
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar xf ch_PP-OCRv3_det_infer.tar
tar xf ch_PP-OCRv3_rec_infer.tar
```

Run text detection inference:

```shell
cd PaddleOCR
python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --image_dir=./doc/imgs/ --use_gpu=True
```

Run text recognition inference:

```shell
python3 tools/infer/predict_rec.py --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs_words/ch/ --use_gpu=True
```

Run the end-to-end text detection + text recognition inference:

```shell
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True
```

To enable TensorRT inference, simply add `--use_tensorrt=True` to the command above:

```shell
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True --use_tensorrt=True
```

For more PP-OCR model inference usage, please refer to the [documentation](../../doc/doc_ch/inference_ppocr.md).
deploy/jeston/readme_en.md (new file)
@@ -0,0 +1,71 @@

# Jetson

This section introduces the deployment of PaddleOCR on Jetson NX, TX2, Nano, AGX and other series of hardware.

## 1. Environment Preparation

You need to prepare a Jetson development board. If you need TensorRT, prepare the TensorRT environment as well; TensorRT version 7.1.3 is recommended.

1. Install paddlepaddle on the Jetson

Download paddlepaddle from this [link](https://www.paddlepaddle.org.cn/inference/user_guides/download_lib.html#python).
Please select the installation package that matches your JetPack version, CUDA version, and TensorRT version.

Install paddlepaddle:

```shell
pip3 install -U paddlepaddle_gpu-*-cp36-cp36m-linux_aarch64.whl
```

2. Download the PaddleOCR code and install its dependencies

Clone the PaddleOCR code:

```shell
git clone https://github.com/PaddlePaddle/PaddleOCR
```

and install the dependencies:

```shell
cd PaddleOCR
pip3 install -r requirements.txt
```

*Note: the Jetson CPU is relatively weak, so installing the dependencies is slow; please wait patiently.*

## 2. Perform prediction

Obtain the PP-OCR model from the model zoo in the [documentation](../../doc/doc_en/ppocr_introduction_en.md). The PP-OCRv3 model is used below as an example to introduce how to use a PP-OCR model on the Jetson.

Download and unzip the PP-OCRv3 models:

```shell
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar xf ch_PP-OCRv3_det_infer.tar
tar xf ch_PP-OCRv3_rec_infer.tar
```

Run text detection inference:

```shell
cd PaddleOCR
python3 tools/infer/predict_det.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --image_dir=./doc/imgs/ --use_gpu=True
```
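`predict_det.py` saves its results under `./inference_results/`, including a text file listing the detected boxes per image. A small parser sketch, assuming each line is an image path and a JSON-style list of quadrilateral boxes separated by a tab (the sample line below is made up for illustration):

```python
import json

def parse_det_line(line: str):
    """Split one 'image_path<TAB>boxes' line into (path, list of 4-point boxes)."""
    path, boxes_json = line.rstrip("\n").split("\t", 1)
    return path, json.loads(boxes_json)

# Made-up sample line in the assumed format: one quadrilateral, four [x, y] points.
sample = 'doc/imgs/00018069.jpg\t[[[42, 89], [201, 89], [201, 123], [42, 123]]]'
path, boxes = parse_det_line(sample)
print(path, len(boxes), boxes[0])
```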

Run text recognition inference:

```shell
python3 tools/infer/predict_rec.py --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs_words/ch/ --use_gpu=True
```

Run the combined text detection + text recognition inference:

```shell
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True
```

To enable TensorRT prediction, just add `--use_tensorrt=True` to the command above:

```shell
python3 tools/infer/predict_system.py --det_model_dir=./inference/ch_PP-OCRv3_det_infer/ --rec_model_dir=./inference/ch_PP-OCRv3_rec_infer/ --image_dir=./doc/imgs/ --use_gpu=True --use_tensorrt=True
```

For more PP-OCR model inference usage, please refer to the [documentation](../../doc/doc_en/inference_ppocr_en.md).