# [WIP] Useful Tools

Apart from training/testing scripts, we provide many useful tools under the
`tools/` directory.
## Analysis Tools

### Plot training logs

`tools/analysis_tools/analyze_logs.py` plots loss/mIoU curves given a training log file. Run `pip install seaborn` first to install the dependency.

```shell
python tools/analysis_tools/analyze_logs.py xxx.json [--keys ${KEYS}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```

Examples:

- Plot the mIoU, mAcc, aAcc metrics.

  ```shell
  python tools/analysis_tools/analyze_logs.py log.json --keys mIoU mAcc aAcc --legend mIoU mAcc aAcc
  ```

- Plot the loss metric.

  ```shell
  python tools/analysis_tools/analyze_logs.py log.json --keys loss --legend loss
  ```
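
- Save the plotted curve to an image file instead of displaying it, using the `--out` option shown in the usage above (the output filename here is just a placeholder):

  ```shell
  python tools/analysis_tools/analyze_logs.py log.json --keys loss --legend loss --out loss_curve.png
  ```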

### Confusion Matrix (experimental)

To generate and plot an `n x n` confusion matrix, where `n` is the number of classes, follow these steps:

#### 1. Generate a prediction result in pkl format using `test.py`

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${PATH_TO_RESULT_FILE}]
```

Example:

```shell
python tools/test.py \
configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py \
checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \
--out result/pred_result.pkl
```

#### 2. Use `confusion_matrix.py` to generate and plot a confusion matrix

```shell
python tools/confusion_matrix.py ${CONFIG_FILE} ${PATH_TO_RESULT_FILE} ${SAVE_DIR} --show
```

Description of arguments:

- `config` : Path to the test config file.
- `prediction_path` : Path to the prediction `.pkl` result.
- `save_dir` : Directory where the confusion matrix will be saved.
- `--show` : Enable result visualization.
- `--color-theme` : Theme of the matrix color map.
- `--cfg_options` : Custom options to override some settings in the config file.

Example:

```shell
python tools/confusion_matrix.py \
configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py \
result/pred_result.pkl \
result/confusion_matrix \
--show
```

### Get the FLOPs and params (experimental)

We provide a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) to compute the FLOPs and params of a given model.

```shell
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```
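
A hedged, concrete invocation, reusing the FCN config that appears earlier on this page and passing an explicit input size via `--shape` (the numbers you get depend on the config and shape, so they will not necessarily match the sample output below):

```shell
# compute FLOPs and params at an explicit input size instead of the default
python tools/analysis_tools/get_flops.py \
configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py \
--shape 2048 1024
```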

You will get a result like this:

```none
==============================
Input shape: (3, 2048, 1024)
Flops: 1429.68 GMac
Params: 48.98 M
==============================
```

:::{note}
This tool is still experimental and we do not guarantee that the numbers are correct. You may use the result for simple comparisons, but double-check it before adopting it in technical reports or papers.
:::

(1) FLOPs are related to the input shape while parameters are not. The default input shape is (3, 2048, 1024).
(2) Some operators, such as GN and custom operators, are not counted in FLOPs.

## Miscellaneous

### Publish a model

Before you upload a model to AWS, you may want to
(1) convert the model weights to CPU tensors, (2) delete the optimizer states, and
(3) compute the hash of the checkpoint file and append the hash id to the filename.

```shell
python tools/misc/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```

E.g.,

```shell
python tools/misc/publish_model.py work_dirs/pspnet/latest.pth psp_r50_512x1024_40k_cityscapes.pth
```

The final output filename will be `psp_r50_512x1024_40k_cityscapes-{hash id}.pth`.
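
If you want to double-check the appended id, here is a quick sketch (assuming, as in other OpenMMLab publish scripts, that the id is the first 8 characters of the checkpoint's SHA-256 hash; verify against your version of `publish_model.py`):

```shell
# the first 8 characters of this checksum should match the `{hash id}` suffix in the filename
sha256sum psp_r50_512x1024_40k_cityscapes-*.pth
```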
### Print the entire config

`tools/misc/print_config.py` prints the whole config verbatim, expanding all its
imports.

```shell
python tools/misc/print_config.py \
${CONFIG} \
--graph \
--cfg-options ${OPTIONS [OPTIONS...]}
```

Description of arguments:

- `config` : The path of a PyTorch model config file.
- `--graph` : Determines whether to print the model's graph.
- `--cfg-options` : Custom options to override some settings in the config file.

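A minimal example, reusing the FCN config that appears earlier on this page (any config under `configs/` works the same way; the optional `--graph` and `--cfg-options` flags are omitted here):

```shell
# print the fully expanded config, including everything pulled in from its imports
python tools/misc/print_config.py configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py
```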
## Model conversion

`tools/model_converters/` provides several scripts to convert pretrained models released by other repos to the MMSegmentation style.

### ViT, Swin, and MiT Transformer Models

- ViT

  `tools/model_converters/vit2mmseg.py` converts keys in timm pretrained ViT models to the MMSegmentation style.

  ```shell
  python tools/model_converters/vit2mmseg.py ${SRC} ${DST}
  ```

- Swin

  `tools/model_converters/swin2mmseg.py` converts keys in official pretrained Swin models to the MMSegmentation style.

  ```shell
  python tools/model_converters/swin2mmseg.py ${SRC} ${DST}
  ```

- SegFormer

  `tools/model_converters/mit2mmseg.py` converts keys in official pretrained MiT models to the MMSegmentation style.

  ```shell
  python tools/model_converters/mit2mmseg.py ${SRC} ${DST}
  ```
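
As a concrete illustration of the pattern shared by all three converters, here is a hedged ViT example; the input and output filenames are hypothetical placeholders rather than files shipped with the repo:

```shell
# convert a separately downloaded timm ViT checkpoint into MMSegmentation-style keys
python tools/model_converters/vit2mmseg.py \
pretrain/vit_base_p16_timm.pth \
pretrain/vit_base_p16_mmseg.pth
```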

## Model Serving

In order to serve an `MMSegmentation` model with [`TorchServe`](https://pytorch.org/serve/), you can follow the steps below:

### 1. Convert model from MMSegmentation to TorchServe

```shell
python tools/torchserve/mmseg2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
2021-07-05 21:11:47 +08:00
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME}
```

:::{note}
`${MODEL_STORE}` needs to be an absolute path to a folder.
:::
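
For example, with the FCN config and checkpoint used elsewhere on this page (the model store path below is a hypothetical absolute path; replace it with your own, and note it must be absolute as stated above):

```shell
python tools/torchserve/mmseg2torchserve.py \
configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py \
checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \
--output-folder /home/user/model-store \
--model-name fcn
```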
### 2. Build `mmseg-serve` docker image
```shell
docker build -t mmseg-serve:latest docker/serve/
```
### 3. Run `mmseg-serve`
Check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment).

In order to run on GPU, you need to install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). You can omit the `--gpus` argument in order to run on CPU.

Example:
```shell
docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmseg-serve:latest
```

[Read the docs](https://github.com/pytorch/serve/blob/072f5d088cce9bb64b2a18af065886c9b01b317b/docs/rest_api.md) about the Inference (8080), Management (8081) and Metrics (8082) APIs.

### 4. Test deployment
```shell
curl -O https://raw.githubusercontent.com/open-mmlab/mmsegmentation/master/resources/3dogs.jpg
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T 3dogs.jpg -o 3dogs_mask.png
```
The response will be a `.png` mask.
You can visualize the output as follows:
```python
import matplotlib.pyplot as plt
import mmcv
plt.imshow(mmcv.imread("3dogs_mask.png", "grayscale"))
plt.show()
```
You should see something similar to:

![3dogs_mask](../../resources/3dogs_mask.png)

You can also use `test_torchserve.py` to compare the results of TorchServe and PyTorch, and to visualize them.
```shell
python tools/torchserve/test_torchserve.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME} \
[--inference-addr ${INFERENCE_ADDR}] [--result-image ${RESULT_IMAGE}] [--device ${DEVICE}]
```
Example:
```shell
python tools/torchserve/test_torchserve.py \
demo/demo.png \
configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py \
checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \
fcn
```