mmsegmentation/tests/test_apis/test_inferencer.py
zoulinxin 72e20a8854
[Feature] remote sensing inference (#3131)
## Motivation

Supports inference for ultra-large-scale remote sensing images.

## Modification

Add rs_image_inference.py in demo.

## Use cases

Taking inference on Vaihingen dataset images with PSPNet as an example, the
following settings are required:

**img**: Specify the path of the image.
**model**: Provide the configuration file for the model.
**checkpoint**: Specify the weight file for the model.
**out**: Set the output path for the results.
**batch_size**: Determine the batch size used during inference.
**win_size**: Specify the width and height of the sliding window (512x512
here).
**stride**: Set the stride of the sliding window (400x400 here).
**thread (default: 1)**: Specify the number of threads used for
inference.
**device (default: cuda:0)**: Specify the device used for inference
(e.g., cuda:0 for GPU or cpu for CPU).

```shell
python demo/rs_image_inference.py demo/demo.png projects/pp_mobileseg/configs/pp_mobileseg/pp_mobileseg_mobilenetv3_2x16_80k_ade20k_512x512_tiny.py pp_mobileseg_mobilenetv3_2xb16_3rdparty-tiny_512x512-ade20k-a351ebf5.pth --batch-size 8 --device cpu --thread 2
```
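The `win_size`/`stride` pair controls how the large image is tiled: windows of `win_size` are taken every `stride` pixels, so adjacent windows overlap whenever the stride is smaller than the window. Below is a minimal, illustrative sketch of that tiling logic only; the function name and border clamping are assumptions for illustration, not the actual implementation inside `demo/rs_image_inference.py`.

```python
# Illustrative sketch: compute (top, left, bottom, right) window coordinates
# for sliding-window inference over a large image. The real demo script may
# handle borders, batching and threading differently.
def sliding_windows(height, width, win_size=(512, 512), stride=(400, 400)):
    win_h, win_w = win_size
    stride_h, stride_w = stride
    windows = []
    for top in range(0, max(height - win_h, 0) + stride_h, stride_h):
        for left in range(0, max(width - win_w, 0) + stride_w, stride_w):
            # clamp the window so it never runs past the image border
            top_c = min(top, max(height - win_h, 0))
            left_c = min(left, max(width - win_w, 0))
            windows.append((top_c, left_c, top_c + win_h, left_c + win_w))
    return windows


# e.g. a 6000x6000 remote sensing tile, 512x512 windows, 400-pixel stride
print(len(sliding_windows(6000, 6000)))
```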

---------

Co-authored-by: xiexinch <xiexinch@outlook.com>
2023-08-31 12:44:46 +08:00


# Copyright (c) OpenMMLab. All rights reserved.
import tempfile

import numpy as np
import torch
from mmengine import ConfigDict
from utils import *  # noqa: F401, F403

from mmseg.apis import MMSegInferencer
from mmseg.registry import MODELS
from mmseg.utils import register_all_modules


def test_inferencer():
    register_all_modules()

    visualizer = dict(
        type='SegLocalVisualizer',
        vis_backends=[dict(type='LocalVisBackend')],
        name='visualizer')

    cfg_dict = dict(
        model=dict(
            type='InferExampleModel',
            data_preprocessor=dict(type='SegDataPreProcessor'),
            backbone=dict(type='InferExampleBackbone'),
            decode_head=dict(type='InferExampleHead'),
            test_cfg=dict(mode='whole')),
        visualizer=visualizer,
        test_dataloader=dict(
            dataset=dict(
                type='ExampleDataset',
                pipeline=[
                    dict(type='LoadImageFromFile'),
                    dict(type='LoadAnnotations'),
                    dict(type='PackSegInputs')
                ]), ))
    cfg = ConfigDict(cfg_dict)

    # build the toy model and dump its weights to a temporary checkpoint
    model = MODELS.build(cfg.model)
    ckpt = model.state_dict()
    ckpt_filename = tempfile.mktemp()
    torch.save(ckpt, ckpt_filename)

    # test initialization
    infer = MMSegInferencer(cfg, ckpt_filename)

    # test forward on a single image and on a list of images
    img = np.random.randint(0, 256, (4, 4, 3))
    infer(img)
    imgs = [img, img]
    infer(imgs)
    results = infer(imgs, out_dir=tempfile.gettempdir())

    # test results: one prediction per input image, same spatial size as input
    assert 'predictions' in results
    assert 'visualization' in results
    assert len(results['predictions']) == 2
    assert results['predictions'][0].shape == (4, 4)
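
Outside the test, the same inferencer API can be driven from user code. A minimal usage sketch, assuming a real config file and a matching checkpoint are available locally; the paths below are placeholders, and only the calls exercised in the test above are used.

```python
import numpy as np

from mmseg.apis import MMSegInferencer
from mmseg.utils import register_all_modules

register_all_modules()

# placeholder paths: substitute a real config and its matching checkpoint
inferencer = MMSegInferencer('path/to/config.py', 'path/to/checkpoint.pth')

# a single HxWx3 image array or a list of arrays, as in the test above
image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
results = inferencer([image, image], out_dir='work_dirs/inference')

# 'predictions' holds one HxW label map per input image
print(results['predictions'][0].shape)
```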