# PaddleClas Pipeline WebService

(English|[简体中文](./README_CN.md))

PaddleClas provides two service deployment methods:
- Based on **PaddleHub Serving**: Code path is "`./deploy/hubserving`". Please refer to the [tutorial](../../deploy/hubserving/readme_en.md)
- Based on **PaddleServing**: Code path is "`./deploy/paddleserving`". Please follow this tutorial.

# Service deployment based on PaddleServing

<a name="environmental-preparation"></a>
## Environmental preparation
PaddleClas operating environment and PaddleServing operating environment are needed.

1. Please prepare the PaddleClas operating environment by referring to this [link](../../docs/zh_CN/tutorials/install.md).
   Download the paddle whl package that matches your environment; version 2.1.0 is recommended.

2. The steps to prepare the PaddleServing operating environment are as follows.
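A minimal pip-based sketch is shown below; the exact versions and the CPU/GPU choice depend on your environment, so treat these lines as assumptions and consult the official Paddle Serving installation guide:
```
# Client, app, and server components of Paddle Serving (versions illustrative)
pip3 install paddle-serving-client
pip3 install paddle-serving-app
# CPU server:
pip3 install paddle-serving-server
# GPU server (pick the build matching your CUDA version):
# pip3 install paddle-serving-server-gpu
```
For GPU deployments, the server package build should match the CUDA version installed on the machine.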
## Model conversion
When using PaddleServing for service deployment, you need to convert the saved inference model into a serving model that is easy to deploy.

First, download the ResNet50_vd inference model:
```
# Download and unzip the ResNet50_vd model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
```
Then, use the installed paddle_serving_client tool to convert the inference model into a serving model.
```
# ResNet50_vd model conversion
python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ResNet50_vd_serving/ \
                                         --serving_client ./ResNet50_vd_client/
```
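In the command above, `--dirname` points at the downloaded inference model, `--model_filename` and `--params_filename` name the model and parameter files inside it, and `--serving_server` and `--serving_client` set the output directories for the server-side and client-side deployment files.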
After the ResNet50_vd inference model is converted, there will be two additional folders, `ResNet50_vd_serving` and `ResNet50_vd_client`, in the current directory, with the following layout:
```
|- ResNet50_vd_serving/
  |- __model__
  |- __params__
  |- serving_server_conf.prototxt
  |- serving_server_conf.stream.prototxt

|- ResNet50_vd_client/
  |- serving_client_conf.prototxt
  |- serving_client_conf.stream.prototxt
```
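The `ResNet50_vd_serving` folder holds the model files plus the server-side configuration, while `ResNet50_vd_client` holds the configuration a client needs to assemble requests.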
Once you have the deployment model files, you need to change the alias names in `serving_server_conf.prototxt`: change `alias_name` in `feed_var` to `image`, and change `alias_name` in `fetch_var` to `prediction`.
The modified `serving_server_conf.prototxt` file is as follows:
```
feed_var {
  name: "inputs"
  alias_name: "image"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "prediction"
  is_lod_tensor: true
  fetch_type: 1
  shape: -1
}
```
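If you prefer to script this change, a sed sketch such as the one below should work; it assumes the converter emitted default alias names matching the `name` fields shown above, so verify them in your own file first:
```
# Hypothetical one-liners: rewrite the two alias_name entries in place.
sed -i 's/alias_name: "inputs"/alias_name: "image"/' ResNet50_vd_serving/serving_server_conf.prototxt
sed -i 's|alias_name: "save_infer_model/scale_0.tmp_1"|alias_name: "prediction"|' ResNet50_vd_serving/serving_server_conf.prototxt
```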
<a name="paddle-serving-pipeline-deployment"></a>
## Paddle Serving pipeline deployment
1. Download the PaddleClas code; if you have already downloaded it, you can skip this step.
```
git clone https://github.com/PaddlePaddle/PaddleClas

# Enter the working directory
cd PaddleClas/deploy/paddleserving/
```
The paddleserving directory contains the code to start the pipeline service and send prediction requests, including:
```
__init__.py
config.yml                  # configuration file for starting the service
pipeline_http_client.py     # script to send a pipeline prediction request over HTTP
pipeline_rpc_client.py      # script to send a pipeline prediction request over RPC
resnet50_web_service.py     # script to start the pipeline server
```
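The service port and the path to the converted model are read from `config.yml`. As a rough, illustrative excerpt (the key layout follows the Paddle Serving pipeline format, but the exact values here are assumptions; check the shipped file):
```
# Illustrative excerpt only; see the shipped config.yml for the real values.
http_port: 18080                     # port that accepts HTTP requests
op:
    imagenet:                        # name of the classification op
        local_service_conf:
            model_config: ./ResNet50_vd_serving   # converted serving model dir
```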
2. Run the following command to start the service.
```
# Start the service and save the running log in log.txt
python3 resnet50_web_service.py &>log.txt &
```
After the service is successfully started, a log similar to the following will be printed in log.txt:

![](./imgs/start_server.png)
<a name="faq"></a>
## FAQ
**Q1**: No result is returned after a request is sent.
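**A1**: A common cause is an HTTP(S) proxy intercepting the request. Make sure no proxy is set in the shells where you start the service and send requests, for example:
```
unset https_proxy
unset http_proxy
```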