`MMOCR` provides some utilities that facilitate the model serving process.
Here is a quick walkthrough of the steps needed to serve models through an API.
## Install TorchServe
You can follow the steps on the [official website](https://github.com/pytorch/serve#install-torchserve-and-torch-model-archiver) to install `TorchServe` and
`torch-model-archiver`.
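If you just want a quick start, both packages are typically available from PyPI; the sketch below assumes a working Python environment and skips prerequisites such as Java, which the official guide covers.

```shell
pip install torchserve torch-model-archiver
```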
## Convert model from MMOCR to TorchServe
We provide a handy tool to convert any `.pth` model into a `.mar` model for TorchServe to use.
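A typical invocation looks like the sketch below, assuming the converter script lives at `tools/deployment/mmocr2torchserve.py` in your MMOCR checkout (the exact path and options may differ across versions); the model store folder usually needs to be an absolute path.

```shell
python tools/deployment/mmocr2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
    --output-folder ${MODEL_STORE} \
    --model-name ${MODEL_NAME}
```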
## Start Serving

By default, TorchServe binds its inference, management, and metrics services to `127.0.0.1`. You can change this behavior by saving the contents below to `config.properties` and running TorchServe with the option `--ts-config config.properties`.
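A typical `config.properties` looks like the following; these are standard TorchServe keys, and the model store path is only an example that should point at the folder holding your `.mar` files.

```properties
# Expose the three services on all interfaces instead of localhost only
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
# Folder that contains the converted .mar files
model_store=/home/model-server/model-store
load_models=all
```

You can then launch the server natively with `torchserve --start --ts-config config.properties`.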
A better alternative is to serve your models through Docker. We provide a Dockerfile
that frees you from tedious and error-prone environment setup steps.
#### Build `mmocr-serve` Docker image
```shell
docker build -t mmocr-serve:latest docker/serve/
```
#### Run `mmocr-serve` with Docker
To run Docker with GPU support, you need to install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html); alternatively, you can omit the `--gpus` argument for a CPU-only session.
The command below will run `mmocr-serve` with a GPU, bind ports `8080` (inference),
`8081` (management) and `8082` (metrics) from the container to `127.0.0.1`, and mount
the checkpoint folder `./checkpoints` on the host machine to `/home/model-server/model-store`
inside the container. For more information, please check the official docs on [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment).
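A run command along these lines should work; the GPU device index and local paths are examples, so adjust them to your machine.

```shell
docker run --rm \
  --gpus device=0 \
  -p 127.0.0.1:8080:8080 \
  -p 127.0.0.1:8081:8081 \
  -p 127.0.0.1:8082:8082 \
  --mount type=bind,source=`realpath ./checkpoints`,target=/home/model-server/model-store \
  mmocr-serve:latest
```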
Note that `realpath ./checkpoints` expands to the absolute path of `./checkpoints`; you can replace it with the absolute path of the folder where you store your TorchServe models.
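Once the server is up, you can sanity-check the deployment by posting an image to TorchServe's inference API. The model name and sample image below are placeholders; use the `--model-name` you chose during conversion and any local image.

```shell
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T demo/demo_text_det.jpg
```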