add build_server.sh and update docs hyperlink
parent f34b7fe374
commit d76705288d
--- /dev/null
+++ b/deploy/paddleserving/build_server.sh
@@ -0,0 +1,88 @@
# Docker image to use:
# registry.baidubce.com/paddlepaddle/paddle:latest-dev-cuda10.1-cudnn7-gcc82
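# An illustrative way to start that image (flags are assumptions, adjust to your setup):
#   docker run --rm -it -v $PWD:/paddle \
#     registry.baidubce.com/paddlepaddle/paddle:latest-dev-cuda10.1-cudnn7-gcc82 /bin/bash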

# Build the Serving server:
# The client and app packages can use the release versions directly,
# but the server must be rebuilt because custom OPs have been added.
# Default ${PWD} at build time: PaddleClas/deploy/paddleserving/

# The first positional argument selects the Python interpreter (default: python)
python_name=${1:-'python'}

apt-get update
apt install -y libcurl4-openssl-dev libbz2-dev

# Fetch the OpenSSL 1.0.2 libraries Serving links against and expose them system-wide
wget -nc https://paddle-serving.bj.bcebos.com/others/centos_ssl.tar
tar xf centos_ssl.tar
rm -rf centos_ssl.tar
mv libcrypto.so.1.0.2k /usr/lib/libcrypto.so.1.0.2k
mv libssl.so.1.0.2k /usr/lib/libssl.so.1.0.2k
ln -sf /usr/lib/libcrypto.so.1.0.2k /usr/lib/libcrypto.so.10
ln -sf /usr/lib/libssl.so.1.0.2k /usr/lib/libssl.so.10
ln -sf /usr/lib/libcrypto.so.10 /usr/lib/libcrypto.so
ln -sf /usr/lib/libssl.so.10 /usr/lib/libssl.so

# Install Go dependencies
rm -rf /usr/local/go
wget -qO- https://paddle-ci.cdn.bcebos.com/go1.17.2.linux-amd64.tar.gz | tar -xz -C /usr/local
export GOROOT=/usr/local/go
export GOPATH=/root/gopath
export PATH=$PATH:$GOPATH/bin:$GOROOT/bin
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway@v1.15.2
go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger@v1.15.2
go install github.com/golang/protobuf/protoc-gen-go@v1.4.3
go install google.golang.org/grpc@v1.33.0
# Switch module mode back to auto once the pinned tools are installed
go env -w GO111MODULE=auto

# Download the OpenCV library
wget https://paddle-qa.bj.bcebos.com/PaddleServing/opencv3.tar.gz
tar -xvf opencv3.tar.gz
rm -rf opencv3.tar.gz
export OPENCV_DIR=$PWD/opencv3

# Clone Serving
git clone https://github.com/PaddlePaddle/Serving.git -b develop --depth=1

cd Serving # PaddleClas/deploy/paddleserving/Serving
export Serving_repo_path=$PWD
git submodule update --init --recursive
${python_name} -m pip install -r python/requirements.txt

# Tell cmake which Python to build the server wheel against
export PYTHON_INCLUDE_DIR=$(${python_name} -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())")
export PYTHON_LIBRARIES=$(${python_name} -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))")
export PYTHON_EXECUTABLE=$(which ${python_name})

export CUDA_PATH='/usr/local/cuda'
export CUDNN_LIBRARY='/usr/local/cuda/lib64/'
export CUDA_CUDART_LIBRARY='/usr/local/cuda/lib64/'
export TENSORRT_LIBRARY_PATH='/usr/local/TensorRT6-cuda10.1-cudnn7/targets/x86_64-linux-gnu/'
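# NOTE: the CUDA/cuDNN/TensorRT paths above match the dev image named at the top of
# this script; adjust them to your machine before building (the doc notes below point here).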

# Copy the custom OP sources into the Serving tree (\cp bypasses any interactive cp alias)
\cp ../preprocess/general_clas_op.* ${Serving_repo_path}/core/general-server/op
\cp ../preprocess/preprocess_op.* ${Serving_repo_path}/core/predictor/tools/pp_shitu_tools

# Build the server
mkdir -p server-build-gpu-opencv
cd server-build-gpu-opencv
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR \
    -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
    -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
    -DCUDA_TOOLKIT_ROOT_DIR=${CUDA_PATH} \
    -DCUDNN_LIBRARY=${CUDNN_LIBRARY} \
    -DCUDA_CUDART_LIBRARY=${CUDA_CUDART_LIBRARY} \
    -DTENSORRT_ROOT=${TENSORRT_LIBRARY_PATH} \
    -DOPENCV_DIR=${OPENCV_DIR} \
    -DWITH_OPENCV=ON \
    -DSERVER=ON \
    -DWITH_GPU=ON ..
make -j32  # adjust the job count to your core count

# Install the freshly built paddle-serving-server wheel
${python_name} -m pip install python/dist/paddle*

# Point SERVING_BIN at the locally built server binary so it is used at runtime
export SERVING_BIN=$PWD/core/general-server/serving
cd ../../
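
Since the script exports `SERVING_BIN` and other variables into the calling shell, it is meant to be sourced rather than executed. A minimal usage sketch (the interpreter argument is an example):

```shell
cd PaddleClas/deploy/paddleserving
source ./build_server.sh python3.7
# The docs below rely on SERVING_BIN pointing at the freshly built binary:
echo ${SERVING_BIN}
```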

@@ -175,7 +175,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # One-click compile and install Serving server, set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note: The path set by **[build_server.sh](./build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
+**Note:** The paths set in [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA location, Python version, etc.) before compiling.
 
 - Modify the client file `ResNet50_client/serving_client_conf.prototxt`: change the field after `feed_type:` to 20, change the field after the first `shape:` to 1, and delete the rest of the `shape` fields (a scripted version follows the example below).
 ```log

@@ -187,9 +187,9 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 shape: 1
 }
 ```
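
The `feed_type`/`shape` edits described above can also be scripted. A minimal sketch, assuming the generated config keeps the usual layout (`feed_var { ... }` with an unindented closing brace, `fetch_var` after it); verify the resulting file by hand:

```shell
cd ResNet50_client
# Set the input type to 20, the value the guide above prescribes.
sed -i 's/feed_type: .*/feed_type: 20/' serving_client_conf.prototxt
# Rewrite the first shape field inside feed_var to "shape: 1" and drop the rest;
# shape fields under fetch_var are left untouched.
awk '/feed_var/ { in_feed = 1 } /^}/ { in_feed = 0 }
     in_feed && /shape:/ { if (!done) { print "  shape: 1"; done = 1 }; next }
     { print }' serving_client_conf.prototxt > tmp && mv tmp serving_client_conf.prototxt
cd ..
```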
-- Modify part of the code of [`test_cpp_serving_client`](./test_cpp_serving_client.py)
-  1. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt` .
-  2. Modify the [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45) part of the code, and change `inputs` to be the same as the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt` name` is the same. Since `name` in some model client files is `x` instead of `inputs` , you need to pay attention to this when using these models for C++ Serving deployment.
+- Modify part of the code of [`test_cpp_serving_client`](../../../deploy/paddleserving/test_cpp_serving_client.py)
+  1. Modify the [`load_client_config`](../../../deploy/paddleserving/test_cpp_serving_client.py#L28) part of the code, and change the path after `load_client_config` to `ResNet50_client/serving_client_conf.prototxt`.
+  2. Modify the [`feed={"inputs": image}`](../../../deploy/paddleserving/test_cpp_serving_client.py#L45) part of the code, and change `inputs` to match the `name` under the `feed_var` field in `ResNet50_client/serving_client_conf.prototxt` (see the check below). Since `name` in some model client files is `x` instead of `inputs`, pay attention to this when using these models for C++ Serving deployment.
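
Before editing, you can check which input `name` a given client config expects, since some models use `x` instead of `inputs`. A minimal check, assuming `name:` immediately follows the `feed_var` line as it does in generated client configs:

```shell
grep -A1 'feed_var' ResNet50_client/serving_client_conf.prototxt | grep 'name:'
```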
 
 - Start the service:
 ```shell

@@ -375,7 +375,7 @@ Different from Python Serving, the C++ Serving client calls C++ OP to predict, s
 # One-click compile and install Serving server, set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note:** The path set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment such as CUDA, python version, etc., and then compiled.
+**Note:** The paths set in [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA location, Python version, etc.) before compiling.
 
 - The input and output formats used by C++ Serving differ from those of Python Serving, so you need to execute the following command to copy the 4 prototxt files over the corresponding files in the folder obtained in [3.1](#31-model-conversion).
 ```shell

@@ -180,7 +180,7 @@ test_cpp_serving_client.py # script that sends C++ Serving prediction requests via RPC
 # One-click compile and install the Serving server, and set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note:** The path set by [build_server.sh](./build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling.
+**Note:** The paths set in [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA location, Python version, etc.) before compiling.
 
 - Modify the client file `ResNet50_vd_client/serving_client_conf.prototxt`: change the field after `feed_type:` to 20, change the field after the first `shape:` to 1, and delete the rest of the `shape` fields.
 ```log

@@ -192,9 +192,9 @@ test_cpp_serving_client.py # script that sends C++ Serving prediction requests via RPC
 shape: 1
 }
 ```
-- Modify part of the code of [`test_cpp_serving_client`](./test_cpp_serving_client.py)
-  1. Modify the code at [`load_client_config`](./test_cpp_serving_client.py#L28), changing the path after `load_client_config` to `ResNet50_vd_client/serving_client_conf.prototxt`.
-  2. Modify the code at [`feed={"inputs": image}`](./test_cpp_serving_client.py#L45), changing `inputs` to match the `name` under the `feed_var` field in `ResNet50_vd_client/serving_client_conf.prototxt`. Since `name` in some model client files is `x` instead of `inputs`, pay attention to this when deploying these models with C++ Serving.
+- Modify part of the code of [`test_cpp_serving_client`](../../../deploy/paddleserving/test_cpp_serving_client.py)
+  1. Modify the code at [`load_client_config`](../../../deploy/paddleserving/test_cpp_serving_client.py#L28), changing the path after `load_client_config` to `ResNet50_vd_client/serving_client_conf.prototxt`.
+  2. Modify the code at [`feed={"inputs": image}`](../../../deploy/paddleserving/test_cpp_serving_client.py#L45), changing `inputs` to match the `name` under the `feed_var` field in `ResNet50_vd_client/serving_client_conf.prototxt`. Since `name` in some model client files is `x` instead of `inputs`, pay attention to this when deploying these models with C++ Serving.
 
 - Start the service:
 ```shell

@@ -379,7 +379,7 @@ test_cpp_serving_client.py # script that sends C++ Serving prediction requests via RPC
 # One-click compile and install the Serving server, and set SERVING_BIN
 source ./build_server.sh python3.7
 ```
-**Note:** The path set by [build_server.sh](../build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA, Python version, etc.) before compiling.
+**Note:** The paths set in [build_server.sh](../../../deploy/paddleserving/build_server.sh#L55-L62) may need to be modified according to the actual machine environment (CUDA location, Python version, etc.) before compiling.
 
 - The input and output formats used by C++ Serving differ from those of Python, so you need to execute the following command to copy the 4 prototxt files over the corresponding 4 files in the folder obtained in [3.1](#31-模型转换).
 ```shell

@@ -424,7 +424,6 @@ test_cpp_serving_client.py # script that sends C++ Serving prediction requests via RPC
 python3.7 -m paddle_serving_server.serve stop
 ```
 When `Process stopped` appears after execution, the service has been shut down successfully.
-```
 
 <a name="5"></a>
 ## 5. FAQ