# Model Serving

In order to serve an `MMClassification` model with [`TorchServe`](https://pytorch.org/serve/), you can follow the steps below:
## 1. Convert model from MMClassification to TorchServe
```shell
python tools/deployment/mmcls2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
  --output-folder ${MODEL_STORE} \
  --model-name ${MODEL_NAME}
```
**Note:** `${MODEL_STORE}` needs to be an absolute path to a folder.
Example:

```shell
python tools/deployment/mmcls2torchserve.py \
  configs/resnet/resnet18_8xb32_in1k.py \
  checkpoints/resnet18_8xb32_in1k_20210831-fbbb1da6.pth \
  --output-folder ./checkpoints \
  --model-name resnet18_in1k
```
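If the conversion succeeds, the output folder should contain a TorchServe model archive named after `--model-name`. A quick sanity check, assuming the example paths above:

```shell
# The archive should be named after --model-name, e.g. resnet18_in1k.mar
ls -lh ./checkpoints/resnet18_in1k.mar
```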
## 2. Build `mmcls-serve` docker image
```shell
docker build -t mmcls-serve:latest docker/serve/
```
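You can confirm the build succeeded, for example:

```shell
# Should list the freshly built mmcls-serve:latest image
docker images mmcls-serve
```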
## 3. Run `mmcls-serve`
Check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md).

In order to run on a GPU, you need to install [nvidia-docker](https://github.com/NVIDIA/nvidia-docker). You can omit the `--gpus` argument to run on CPU instead.
Example:

```shell
docker run --rm \
  --cpus 8 \
  --gpus device=0 \
  -p8080:8080 -p8081:8081 -p8082:8082 \
  --mount type=bind,source=`realpath ./checkpoints`,target=/home/model-server/model-store \
  mmcls-serve:latest
```
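If no GPU is available, a CPU-only run is the same command without the `--gpus` flag, e.g.:

```shell
docker run --rm \
  --cpus 8 \
  -p8080:8080 -p8081:8081 -p8082:8082 \
  --mount type=bind,source=`realpath ./checkpoints`,target=/home/model-server/model-store \
  mmcls-serve:latest
```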
`realpath ./checkpoints` expands to the absolute path of `./checkpoints`; you can replace it with the absolute path where you store your TorchServe models.
Read the docs about the Inference (8080), Management (8081) and Metrics (8082) APIs.
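Once the container is up, you can verify that the server is healthy and the model is registered before sending predictions. Both endpoints below are standard TorchServe APIs:

```shell
# Inference API health check; should return {"status": "Healthy"}
curl http://127.0.0.1:8080/ping

# Management API; lists registered models, should include your model name
curl http://127.0.0.1:8081/models
```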
## 4. Test deployment
```shell
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T demo/demo.JPEG
```
You should obtain a response similar to:

```json
{
  "pred_label": 58,
  "pred_score": 0.38102269172668457,
  "pred_class": "water snake"
}
```
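The same endpoint can also be scripted over many images; a minimal sketch, assuming a folder of JPEGs (the `demo/` folder here is just an example):

```shell
# Send every JPEG in a folder to the endpoint, printing one JSON response per line
for img in demo/*.JPEG; do
  curl -s http://127.0.0.1:8080/predictions/resnet18_in1k -T "$img"
  echo
done
```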
And you can use `test_torchserver.py` to compare the results of TorchServe and PyTorch, and visualize them.

```shell
python tools/deployment/test_torchserver.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME}
[--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}]
```
Example:

```shell
python tools/deployment/test_torchserver.py \
  demo/demo.JPEG \
  configs/resnet/resnet18_8xb32_in1k.py \
  checkpoints/resnet18_8xb32_in1k_20210831-fbbb1da6.pth \
  resnet18_in1k
```