update develop serving docs into 2.4
## 3. Image Classification Service Deployment

The following takes the classic ResNet50_vd model as an example to introduce how to deploy the image classification service.

<a name="3.1"></a>
### 3.1 Model conversion

When using PaddleServing for service deployment, you need to convert the saved inference model into a Serving model.

- Go to the working directory:

```shell
cd deploy/paddleserving
```
- Download and unzip the inference model for ResNet50_vd:

```shell
# Download ResNet50_vd inference model
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
# Decompress the ResNet50_vd inference model
tar xf ResNet50_vd_infer.tar
```

- Use the paddle_serving_client command to convert the downloaded inference model into a model format for easy server deployment:

```shell
# Convert ResNet50_vd model
python3.7 -m paddle_serving_client.convert \
    --dirname ./ResNet50_vd_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./ResNet50_vd_serving/ \
    --serving_client ./ResNet50_vd_client/
```
The specific meaning of the parameters in the above command is shown in the following table:

| parameter | type | default value | description |
| --------- | ---- | ------------- | ----------- |
| `dirname` | str | - | The storage path of the model files to be converted. The Program structure file and the parameter files are both saved in this directory. |
| `model_filename` | str | None | The name of the file that stores the Inference Program structure of the model to be converted. If set to None, `__model__` is used as the default filename. |
| `params_filename` | str | None | The name of the file that stores all parameters of the model to be converted. It needs to be specified if and only if all model parameters are stored in a single binary file; if the parameters are stored in separate files, set it to None. |
| `serving_server` | str | `"serving_server"` | The storage path of the converted model files and configuration files. Defaults to serving_server. |
| `serving_client` | str | `"serving_client"` | The storage path of the converted client configuration files. Defaults to serving_client. |
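
As an aside, the last two flags can be omitted; per the defaults in the table, the converted files then land in `./serving_server/` and `./serving_client/`. This is only a sketch of that variant, and the rest of this section keeps using the explicit `ResNet50_vd_serving`/`ResNet50_vd_client` paths shown above:

```shell
# Same conversion, relying on the default output directories listed in the table
python3.7 -m paddle_serving_client.convert \
    --dirname ./ResNet50_vd_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams
```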

After the ResNet50_vd inference model conversion is completed, there will be additional `ResNet50_vd_serving` and `ResNet50_vd_client` folders in the current directory, with the following structure:

```shell
├── ResNet50_vd_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── ResNet50_vd_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
```

- To be compatible with the deployment of different models, Serving provides the ability to rename inputs and outputs: when deploying a different model, you only need to modify the `alias_name` in the configuration files, without changing any code. Therefore, after the conversion, modify `serving_server_conf.prototxt` under `ResNet50_vd_serving` and `serving_client_conf.prototxt` under `ResNet50_vd_client` respectively, changing the `alias_name` in `fetch_var` to `prediction`. The modified `serving_server_conf.prototxt` looks like this:

```log
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "prediction"
  is_lod_tensor: false
  fetch_type: 1
  shape: 1000
}
```
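
If you prefer to script this edit, the following is a minimal sketch, assuming GNU `sed` and that the original `alias_name` in `fetch_var` still equals the variable name shown above; the files are small, so editing them by hand works just as well:

```shell
# Rename the fetch_var alias to "prediction" in both configuration files
# (assumes the default alias produced by the conversion; adjust if your files differ)
sed -i 's#alias_name: "save_infer_model/scale_0.tmp_1"#alias_name: "prediction"#' \
    ResNet50_vd_serving/serving_server_conf.prototxt \
    ResNet50_vd_client/serving_client_conf.prototxt
```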

<a name="3.2"></a>
### 3.2 Service deployment and request

The paddleserving directory contains the code for starting the pipeline service, starting the C++ serving service and sending prediction requests, mainly including:

```shell
__init__.py
classification_web_service.py  # Script to start the pipeline server
config.yml                     # Configuration file to start the pipeline service
pipeline_http_client.py        # Script for sending pipeline prediction requests over http
pipeline_rpc_client.py         # Script for sending pipeline prediction requests over rpc
readme.md                      # Documentation for deploying the classification model service
run_cpp_serving.sh             # Script to start the C++ Serving deployment
test_cpp_serving_client.py     # Script for sending C++ serving prediction requests over rpc
```

<a name="3.2.1"></a>
#### 3.2.1 Python Serving

- Start the service:

```shell
# Start the service; the running log is saved in log.txt
python3.7 classification_web_service.py &>log.txt &
```

- Send a request:

```shell
# Send a service request
python3.7 pipeline_http_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []}
```
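
For a quick check without Python, the same request can also be sent with `curl`. This is only a sketch: the port and service name below (`18080` and `imagenet`) are assumptions that must match the `http_port` in `config.yml` and the name registered by `classification_web_service.py`, and the image path is a placeholder for any local test image:

```shell
# Base64-encode a test image and post it in the pipeline request format {"key": [...], "value": [...]}
IMG_B64=$(base64 -w 0 ./your_test_image.jpg)   # placeholder path; -w 0 is GNU coreutils
curl -s -X POST http://127.0.0.1:18080/imagenet/prediction \
     -H "Content-Type: application/json" \
     -d "{\"key\": [\"image\"], \"value\": [\"${IMG_B64}\"]}"
```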

- Shut down the service:

If the service program is running in the foreground, press `Ctrl+C` to terminate it. If it is running in the background, you can use the kill command to close the related processes, or execute the following command in the path where the service program was started:

```bash
python3.7 -m paddle_serving_server.serve stop
```

When the command finishes and the `Process stopped` message appears, the service has been shut down successfully.

<a name="3.2.2"></a>
#### 3.2.2 C++ Serving

Unlike Python Serving, the C++ Serving client calls C++ OPs for prediction, so before starting the service you need to compile and install the Serving server package and set `SERVING_BIN`.

- Compile and install the Serving server package:

```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving
# One-click compile and install the Serving server, and set SERVING_BIN
source ./build_server.sh python3.7
```

**Note:** The paths set in [build_server.sh](./build_server.sh#L55-L62) may need to be adjusted to the actual environment of your machine (CUDA, Python version, etc.) before compiling.
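
As a quick optional check that `build_server.sh` finished and exported the binary path, you can verify the environment variable in the same shell:

```shell
# Prints an error and a non-zero exit status if SERVING_BIN was not set by build_server.sh
echo "${SERVING_BIN:?SERVING_BIN is not set}"
```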

- Modify the client file `ResNet50_vd_client/serving_client_conf.prototxt`: change the field after `feed_type:` to 20, change the field after the first `shape:` to 1, and delete the remaining `shape` fields, so that `feed_var` looks like this:

```log
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lod_tensor: false
  feed_type: 20
  shape: 1
}
```

- Modify part of the code in [`test_cpp_serving_client.py`](./test_cpp_serving_client.py) (the grep sketch after this list can help locate the lines):

  1. Modify the [`load_client_config`](./test_cpp_serving_client.py#L28) line so that the path passed to `load_client_config` is `ResNet50_vd_client/serving_client_conf.prototxt`.
  2. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) line so that the key `inputs` matches the `name` under `feed_var` in `ResNet50_vd_client/serving_client_conf.prototxt`. Note that in the client files of some models the `name` is `x` rather than `inputs`, so pay attention to this when deploying those models with C++ Serving.
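
The line numbers in the links above can drift between releases, so a quick way to find the spots to edit and to confirm the expected feed name is to grep for them; this is just a convenience sketch:

```shell
# Locate the two lines to edit in the client script
grep -n "load_client_config" test_cpp_serving_client.py
grep -n "feed=" test_cpp_serving_client.py
# Confirm the feed_var name expected by the client configuration (e.g. "inputs" or "x")
grep "name:" ResNet50_vd_client/serving_client_conf.prototxt
```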

- Start the service:

```shell
# Start the service; it runs in the background and the running log is saved in nohup.txt
# CPU deployment
sh run_cpp_serving.sh
# GPU deployment, using card 0
sh run_cpp_serving.sh 0
```

- Send a request:

```shell
# Send a service request
python3.7 test_cpp_serving_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
prediction: daisy, probability: 0.9341399073600769
```

- Shut down the service:

If the service program is running in the foreground, press `Ctrl+C` to terminate it. If it is running in the background, you can use the kill command to close the related processes, or execute the following command in the path where the service program was started:

```bash
python3.7 -m paddle_serving_server.serve stop
```

When the command finishes and the `Process stopped` message appears, the service has been shut down successfully.

<a name="4"></a>
## 4. Image Recognition Service Deployment

When deploying the image recognition service with PaddleServing, **multiple saved inference models all need to be converted to Serving models**. The following takes the ultra-lightweight image recognition model in PP-ShiTu as an example to introduce how to deploy the image recognition service.

<a name="4.1"></a>
### 4.1 Model conversion

- Go to the working directory:

```shell
cd deploy/
```

- Download the generic detection inference model and the generic recognition inference model:

```shell
# Create and enter the models folder
mkdir models
cd models
# Download and unzip the generic recognition model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
# Download and unzip the generic detection model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
```

- Convert the generic recognition inference model to a Serving model:

```shell
# Convert the generic recognition model
python3.7 -m paddle_serving_client.convert \
    --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
    --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```

The parameters of the above command have the same meaning as in [3.1 Model conversion](#3.1).

After the conversion of the generic recognition inference model is completed, `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` folders will be added to the current directory. Modify the alias names in `serving_server_conf.prototxt` under `general_PPLCNet_x2_5_lite_v1.0_serving/` and `serving_client_conf.prototxt` under `general_PPLCNet_x2_5_lite_v1.0_client/` respectively: change the `alias_name` in `fetch_var` to `features`. The modified `serving_server_conf.prototxt` looks like this:

```log
feed_var {
  name: "x"
  alias_name: "x"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "features"
  is_lod_tensor: false
  fetch_type: 1
  shape: 512
}
```
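
As with the classification model, this edit can be scripted. The sketch below assumes GNU `sed`, that the original `alias_name` in `fetch_var` is still the default shown above, and that the client-side file is the `serving_client_conf.prototxt` listed in the directory structure below:

```shell
# Rename the fetch_var alias to "features" in both the server and client configs
sed -i 's#alias_name: "save_infer_model/scale_0.tmp_1"#alias_name: "features"#' \
    general_PPLCNet_x2_5_lite_v1.0_serving/serving_server_conf.prototxt \
    general_PPLCNet_x2_5_lite_v1.0_client/serving_client_conf.prototxt
```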

The converted recognition model directories have the following structure:

```shell
├── general_PPLCNet_x2_5_lite_v1.0_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── general_PPLCNet_x2_5_lite_v1.0_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
```

- Convert the generic detection inference model to a Serving model:

```shell
# Convert the generic detection model
python3.7 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
    --serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```

The parameters of the above command have the same meaning as in [3.1 Model conversion](#3.1).

After the conversion of the generic detection inference model is completed, there will be additional `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` folders in the current directory, with the following structure:

```shell
├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
```

**Note:** There is no need to modify the alias names in `serving_server_conf.prototxt` under `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` here.

- Download and unzip the prebuilt retrieval library index:

```shell
# Go back to the deploy directory
cd ../
# Download the prebuilt retrieval library index
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar
# Decompress the prebuilt retrieval library index
tar -xf drink_dataset_v1.0.tar
```
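
To confirm the archive was unpacked where the recognition service expects it, a quick look at the directory is enough; this is just a convenience check, assuming the archive unpacks into a directory of the same name:

```shell
# List the contents of the extracted retrieval library
ls drink_dataset_v1.0/
```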

<a name="4.2"></a>
### 4.2 Service deployment and request

**Note:** The recognition service involves multiple models, so a Pipeline deployment is used for performance reasons. Pipeline deployment does not currently support the Windows platform.

- Go to the working directory:

```shell
cd ./deploy/paddleserving/recognition
```

The paddleserving directory contains the code for starting the Python Pipeline service, starting the C++ Serving service and sending prediction requests, including:

```shell
__init__.py
config.yml                  # Configuration file to start the python pipeline service
pipeline_http_client.py     # Script for sending pipeline prediction requests over http
pipeline_rpc_client.py      # Script for sending pipeline prediction requests over rpc
recognition_web_service.py  # Script to start the pipeline server
readme.md                   # Documentation for deploying the recognition model service
run_cpp_serving.sh          # Script to start the C++ Serving deployment
test_cpp_serving_client.py  # Script for sending C++ serving prediction requests over rpc
```

<a name="4.2.1"></a>
#### 4.2.1 Python Serving

- Start the service:

```shell
# Start the service; the running log is saved in log.txt
python3.7 recognition_web_service.py &>log.txt &
```

- Send a request:

```shell
python3.7 pipeline_http_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': 'Red Bull-Enhanced', 'rec_scores': 0.79903316}]"], 'tensors': []}
```

<a name="4.2.2"></a>
#### 4.2.2 C++ Serving

Unlike Python Serving, the C++ Serving client calls C++ OPs for prediction, so before starting the service you need to compile and install the Serving server package and set `SERVING_BIN`.

- Compile and install the Serving server package:

```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving
# One-click compile and install the Serving server, and set SERVING_BIN
source ./build_server.sh python3.7
```

**Note:** The paths set in [build_server.sh](../build_server.sh#L55-L62) may need to be adjusted to the actual environment of your machine (CUDA, Python version, etc.) before compiling.

- The input and output formats used by C++ Serving are different from those of Python Serving, so you need to execute the following commands to overwrite the four corresponding prototxt files in the folders obtained in [4.1 Model conversion](#4.1) with the preprocessed versions:

```shell
# Enter the PaddleClas/deploy directory
cd PaddleClas/deploy/

# Overwrite the prototxt files
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_serving/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_serving/
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_client/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
```

- Start the service:

```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving/recognition

# The default port number is 9400; the running log is saved in log_PPShiTu.txt by default
# CPU deployment
sh run_cpp_serving.sh
# GPU deployment, using card 0
sh run_cpp_serving.sh 0
```
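
Optionally, before sending requests you can confirm that the server is listening on the default port (9400, as noted in the comment above); this is just a convenience check:

```shell
# Either command shows a LISTEN entry for port 9400 once the service is up
ss -nlt | grep 9400 || netstat -nlt | grep 9400
```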

- Send a request:

```shell
# Send a service request
python3.7 test_cpp_serving_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0614 03:01:36.273097  6084 naming_service_thread.cpp:202] brpc::policy::ListNamingService("127.0.0.1:9400"): added 1
I0614 03:01:37.393564  6084 general_model.cpp:490] [client]logid=0,client_cost=1107.82ms,server_cost=1101.75ms.
[{'bbox': [345, 95, 524, 585], 'rec_docs': 'Red Bull-Enhanced', 'rec_scores': 0.8073724}]
```

- Shut down the service:

If the service program is running in the foreground, press `Ctrl+C` to terminate it. If it is running in the background, you can use the kill command to close the related processes, or execute the following command in the path where the service program was started:

```bash
python3.7 -m paddle_serving_server.serve stop
```

When the command finishes and the `Process stopped` message appears, the service has been shut down successfully.

<a name="5"></a>

<a name="3"></a>
## 3. Image Classification Service Deployment

The following takes the classic ResNet50_vd model as an example to introduce how to deploy the image classification service.

<a name="3.1"></a>
### 3.1 Model conversion

When using PaddleServing for service deployment, you need to convert the saved inference model into a Serving model.

- Go to the working directory:

```shell
cd deploy/paddleserving
```

- Download and unzip the inference model for ResNet50_vd:

```shell
# Download ResNet50_vd inference model
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
# Decompress the ResNet50_vd inference model
tar xf ResNet50_vd_infer.tar
```

- Use the paddle_serving_client command to convert the downloaded inference model into a model format for easy server deployment:

```shell
# Convert ResNet50_vd model
python3.7 -m paddle_serving_client.convert \
    --dirname ./ResNet50_vd_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./ResNet50_vd_serving/ \
    --serving_client ./ResNet50_vd_client/
```

The specific meaning of the parameters in the above command is shown in the following table:

| parameter | type | default value | description |
| --------- | ---- | ------------- | ----------- |
| `dirname` | str | - | The storage path of the model files to be converted. The Program structure file and the parameter files are both saved in this directory. |
| `model_filename` | str | None | The name of the file that stores the Inference Program structure of the model to be converted. If set to None, `__model__` is used as the default filename. |
| `params_filename` | str | None | The name of the file that stores all parameters of the model to be converted. It needs to be specified if and only if all model parameters are stored in a single binary file; if the parameters are stored in separate files, set it to None. |
| `serving_server` | str | `"serving_server"` | The storage path of the converted model files and configuration files. Defaults to serving_server. |
| `serving_client` | str | `"serving_client"` | The storage path of the converted client configuration files. Defaults to serving_client. |

After the ResNet50_vd inference model conversion is completed, there will be additional `ResNet50_vd_serving` and `ResNet50_vd_client` folders in the current directory, with the following structure:

```shell
├── ResNet50_vd_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── ResNet50_vd_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
```

- To be compatible with the deployment of different models, Serving provides the ability to rename inputs and outputs: when deploying a different model, you only need to modify the `alias_name` in the configuration files, without changing any code. Therefore, after the conversion, modify `serving_server_conf.prototxt` under `ResNet50_vd_serving` and `serving_client_conf.prototxt` under `ResNet50_vd_client` respectively, changing the field after `alias_name:` in `fetch_var` to `prediction`. The modified `serving_server_conf.prototxt` and `serving_client_conf.prototxt` look like this:

```log
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "prediction"
  is_lod_tensor: false
  fetch_type: 1
  shape: 1000
}
```
<a name="3.2"></a>
|
||||
### 3.2 服务部署和请求
|
||||
paddleserving 目录包含了启动 pipeline 服务、C++ serving服务和发送预测请求的代码,包括:
|
||||
|
||||
paddleserving 目录包含了启动 pipeline 服务、C++ serving服务和发送预测请求的代码,主要包括:
|
||||
```shell
|
||||
__init__.py
|
||||
config.yml # 启动pipeline服务的配置文件
|
||||
pipeline_http_client.py # http方式发送pipeline预测请求的脚本
|
||||
pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
|
||||
classification_web_service.py # 启动pipeline服务端的脚本
|
||||
run_cpp_serving.sh # 启动C++ Serving部署的脚本
|
||||
test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本
|
||||
classification_web_service.py # 启动pipeline服务端的脚本
|
||||
config.yml # 启动pipeline服务的配置文件
|
||||
pipeline_http_client.py # http方式发送pipeline预测请求的脚本
|
||||
pipeline_rpc_client.py # rpc方式发送pipeline预测请求的脚本
|
||||
readme.md # 分类模型服务化部署文档
|
||||
run_cpp_serving.sh # 启动C++ Serving部署的脚本
|
||||
test_cpp_serving_client.py # rpc方式发送C++ serving预测请求的脚本
|
||||
```

<a name="3.2.1"></a>
#### 3.2.1 Python Serving

- Start the service:

```shell
# Start the service; the running log is saved in log.txt
python3.7 classification_web_service.py &>log.txt &
```

- Send a request:

```shell
# Send a service request
python3.7 pipeline_http_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []}
```

- Shut down the service:

If the service program is running in the foreground, press `Ctrl+C` to terminate it. If it is running in the background, you can use the kill command to close the related processes, or execute the following command in the path where the service program was started:

```bash
python3.7 -m paddle_serving_server.serve stop
```

When the command finishes and the `Process stopped` message appears, the service has been shut down successfully.
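
If you want to double-check that no serving processes are left behind after stopping, listing them is a simple sanity check; this is only a convenience sketch, not part of the official workflow:

```shell
# Should print nothing once all serving-related processes have exited
ps -ef | grep -E "paddle_serving|web_service" | grep -v grep
```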

<a name="3.2.2"></a>
#### 3.2.2 C++ Serving

Unlike Python Serving, the C++ Serving client calls C++ OPs for prediction, so before starting the service you need to compile and install the Serving server package and set `SERVING_BIN`.

- Compile and install the Serving server package:

```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving
# One-click compile and install the Serving server, and set SERVING_BIN
source ./build_server.sh python3.7
```

**Note:** The paths set in [build_server.sh](./build_server.sh#L55-L62) may need to be adjusted to the actual environment of your machine (CUDA, Python version, etc.) before compiling.

- Modify the client file `ResNet50_vd_client/serving_client_conf.prototxt`: change the field after `feed_type:` to 20, change the field after the first `shape:` to 1, and delete the remaining `shape` fields, so that `feed_var` looks like this:

```log
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lod_tensor: false
  feed_type: 20
  shape: 1
}
```

- Modify part of the code in [`test_cpp_serving_client.py`](./test_cpp_serving_client.py):

  1. Modify the [`load_client_config`](./test_cpp_serving_client.py#L28) line so that the path passed to `load_client_config` is `ResNet50_vd_client/serving_client_conf.prototxt`.
  2. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) line so that the key `inputs` matches the `name` under `feed_var` in `ResNet50_vd_client/serving_client_conf.prototxt`. Note that in the client files of some models the `name` is `x` rather than `inputs`, so pay attention to this when deploying those models with C++ Serving.
- Start the service:

```shell
# Start the service; it runs in the background and the running log is saved in nohup.txt
# CPU deployment
bash run_cpp_serving.sh
# GPU deployment, using card 0
bash run_cpp_serving.sh 0
```

- Send a request:

```shell
# Send a service request
python3.7 test_cpp_serving_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
prediction: daisy, probability: 0.9341399073600769
```

- Shut down the service:

If the service program is running in the foreground, press `Ctrl+C` to terminate it. If it is running in the background, you can use the kill command to close the related processes, or execute the following command in the path where the service program was started:

```bash
python3.7 -m paddle_serving_server.serve stop
```

When the command finishes and the `Process stopped` message appears, the service has been shut down successfully.
<a name="4"></a>
|
||||
## 4.图像识别服务部署
|
||||
使用 PaddleServing 做服务化部署时,需要将保存的 inference 模型转换为 Serving 模型。 下面以 PP-ShiTu 中的超轻量图像识别模型为例,介绍图像识别服务的部署。
|
||||
## 4. 图像识别服务部署
|
||||
|
||||
使用 PaddleServing 做图像识别服务化部署时,**需要将保存的多个 inference 模型都转换为 Serving 模型**。 下面以 PP-ShiTu 中的超轻量图像识别模型为例,介绍图像识别服务的部署。
|
||||
<a name="4.1"></a>
|
||||
## 4.1 模型转换
|
||||
|
||||
### 4.1 模型转换
|
||||
|
||||
- 进入工作目录:
|
||||
```shell
|
||||
cd deploy/
|
||||
```

- Download the generic detection inference model and the generic recognition inference model:

```shell
# Create and enter the models folder
mkdir models
cd models
# Download and unzip the generic recognition model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/general_PPLCNet_x2_5_lite_v1.0_infer.tar
tar -xf general_PPLCNet_x2_5_lite_v1.0_infer.tar
# Download and unzip the generic detection model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
```

- Convert the generic recognition inference model to a Serving model:

```shell
# Convert the generic recognition model
python3.7 -m paddle_serving_client.convert \
    --dirname ./general_PPLCNet_x2_5_lite_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./general_PPLCNet_x2_5_lite_v1.0_serving/ \
    --serving_client ./general_PPLCNet_x2_5_lite_v1.0_client/
```

The parameters of the above command have the same meaning as in [3.1 Model conversion](#3.1).

After the conversion of the generic recognition inference model is completed, there will be additional `general_PPLCNet_x2_5_lite_v1.0_serving/` and `general_PPLCNet_x2_5_lite_v1.0_client/` folders in the current directory, with the following structure:

```shell
├── general_PPLCNet_x2_5_lite_v1.0_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── general_PPLCNet_x2_5_lite_v1.0_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
```

Modify the alias names in `serving_server_conf.prototxt` under `general_PPLCNet_x2_5_lite_v1.0_serving/` and `serving_client_conf.prototxt` under `general_PPLCNet_x2_5_lite_v1.0_client/` respectively: change the `alias_name` in `fetch_var` to `features`. The modified `serving_server_conf.prototxt` looks like this:

```log
feed_var {
  name: "x"
  alias_name: "x"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "features"
  is_lod_tensor: false
  fetch_type: 1
  shape: 512
}
```

- Convert the generic detection inference model to a Serving model:

```shell
# Convert the generic detection model
python3.7 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
    --serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
```

The parameters of the above command have the same meaning as in [3.1 Model conversion](#3.1).

After the conversion of the generic detection inference model is completed, there will be additional `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` and `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/` folders in the current directory, with the following structure:

```shell
├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
```

**Note:** There is no need to modify the alias names in `serving_server_conf.prototxt` under `picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/` here.

- Download and unzip the prebuilt retrieval library index:

```shell
# Go back to the deploy directory
cd ../
# Download the prebuilt retrieval library index
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v1.0.tar
# Decompress the prebuilt retrieval library index
tar -xf drink_dataset_v1.0.tar
```

<a name="4.2"></a>
### 4.2 Service deployment and request

**Note:** The recognition service involves multiple models, so a Pipeline deployment is used for performance reasons. Pipeline deployment does not currently support the Windows platform.

- Go to the working directory:

```shell
cd ./deploy/paddleserving/recognition
```

The paddleserving directory contains the code for starting the Python Pipeline service, starting the C++ Serving service and sending prediction requests, including:

```shell
__init__.py
config.yml                  # Configuration file to start the python pipeline service
pipeline_http_client.py     # Script for sending pipeline prediction requests over http
pipeline_rpc_client.py      # Script for sending pipeline prediction requests over rpc
recognition_web_service.py  # Script to start the pipeline server
readme.md                   # Documentation for deploying the recognition model service
run_cpp_serving.sh          # Script to start the C++ Serving deployment
test_cpp_serving_client.py  # Script for sending C++ serving prediction requests over rpc
```

<a name="4.2.1"></a>
#### 4.2.1 Python Serving

- Start the service:

```shell
# Start the service; the running log is saved in log.txt
python3.7 recognition_web_service.py &>log.txt &
```

- Send a request:

```shell
python3.7 pipeline_http_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["[{'bbox': [345, 95, 524, 576], 'rec_docs': '红牛-强化型', 'rec_scores': 0.79903316}]"], 'tensors': []}
```

<a name="4.2.2"></a>
#### 4.2.2 C++ Serving

Unlike Python Serving, the C++ Serving client calls C++ OPs for prediction, so before starting the service you need to compile and install the Serving server package and set `SERVING_BIN`.

- Compile and install the Serving server package:

```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving
# One-click compile and install the Serving server, and set SERVING_BIN
source ./build_server.sh python3.7
```

**Note:** The paths set in [build_server.sh](../build_server.sh#L55-L62) may need to be adjusted to the actual environment of your machine (CUDA, Python version, etc.) before compiling.

- The input and output formats used by C++ Serving are different from those of Python Serving, so you need to execute the following commands to overwrite the four corresponding prototxt files in the folders obtained in [4.1 Model conversion](#4.1) with the preprocessed versions:

```shell
# Enter the PaddleClas/deploy directory
cd PaddleClas/deploy/

# Overwrite the prototxt files
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_serving/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_serving/
\cp ./paddleserving/recognition/preprocess/general_PPLCNet_x2_5_lite_v1.0_client/*.prototxt ./models/general_PPLCNet_x2_5_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
\cp ./paddleserving/recognition/preprocess/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/*.prototxt ./models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
```

- Start the service:

```shell
# Enter the working directory
cd PaddleClas/deploy/paddleserving/recognition

# The default port number is 9400; the running log is saved in log_PPShiTu.txt by default
# CPU deployment
bash run_cpp_serving.sh
# GPU deployment, using card 0
bash run_cpp_serving.sh 0
```

- Send a request:

```shell
# Send a service request
python3.7 test_cpp_serving_client.py
```

After a successful run, the prediction results of the model are printed in the client window, as shown below:

```log
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0614 03:01:36.273097  6084 naming_service_thread.cpp:202] brpc::policy::ListNamingService("127.0.0.1:9400"): added 1
I0614 03:01:37.393564  6084 general_model.cpp:490] [client]logid=0,client_cost=1107.82ms,server_cost=1101.75ms.
[{'bbox': [345, 95, 524, 585], 'rec_docs': '红牛-强化型', 'rec_scores': 0.8073724}]
```

- Shut down the service:

If the service program is running in the foreground, press `Ctrl+C` to terminate it. If it is running in the background, you can use the kill command to close the related processes, or execute the following command in the path where the service program was started:

```bash
python3.7 -m paddle_serving_server.serve stop
```

When the command finishes and the `Process stopped` message appears, the service has been shut down successfully.
<a name="5"></a>