# Data Transforms
## Design of Data pipelines
Following typical conventions, we use `Dataset` and `DataLoader` for data loading
with multiple workers. `Dataset` returns a dict of data items corresponding to
the arguments of the model's forward method.
Since images in semantic segmentation may not all be the same size,
we introduce a new `DataContainer` type in MMCV to help collect and distribute
data of different sizes.
See [here](https://github.com/open-mmlab/mmcv/blob/master/mmcv/parallel/data_container.py) for more details.

The data preparation pipeline and the dataset are decoupled. Usually a dataset
defines how to process the annotations, and a data pipeline defines all the steps to prepare a data dict.
A pipeline consists of a sequence of operations. Each operation takes a dict as input and also outputs a dict for the next transform.
The operations are categorized into data loading, pre-processing, formatting and test-time augmentation.

Here is a pipeline example for PSPNet:
```python
crop_size = (512, 1024)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(
        type='RandomResize',
        scale=(2048, 1024),
        ratio_range=(0.5, 2.0),
        keep_ratio=True),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
    # add loading annotation after ``Resize`` because ground truth
    # does not need to do resize data transform
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
```
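Every entry above names a transform that obeys the dict-in/dict-out contract described earlier. The contract itself can be sketched with a toy transform loosely modeled on `RandomFlip` (an illustrative simplification, not the real MMCV implementation, which inherits from `BaseTransform` and is looked up in a registry):

```python
import numpy as np

class ToyRandomFlip:
    """Toy dict-in/dict-out transform, loosely modeled on ``RandomFlip``.

    Illustrative sketch only: real transforms subclass
    ``mmcv.transforms.BaseTransform`` and are registered with ``TRANSFORMS``.
    """

    def __init__(self, prob=0.5):
        self.prob = prob

    def __call__(self, results):
        flip = bool(np.random.rand() < self.prob)
        # Record the decision so later transforms and packing can see it.
        results['flip'] = flip
        results['flip_direction'] = 'horizontal' if flip else None
        if flip:
            results['img'] = np.flip(results['img'], axis=1)
            results['gt_seg_map'] = np.flip(results['gt_seg_map'], axis=1)
        return results

results = dict(img=np.zeros((4, 6, 3)), gt_seg_map=np.zeros((4, 6)))
results = ToyRandomFlip(prob=1.0)(results)  # prob=1.0 makes the flip certain
```

Because every transform only reads and writes the shared `results` dict, arbitrary transforms can be chained in whatever order the config specifies.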
For each operation, we list the related dict fields that are added/updated/removed.

Before any pipeline runs, the only information we can obtain directly from the dataset is `img_path` and `seg_map_path`.
### Data loading
`LoadImageFromFile`

- add: img, img_shape, ori_shape

`LoadAnnotations`

- add: seg_fields, gt_seg_map
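Starting from a dict that holds only `img_path` and `seg_map_path`, each loading step adds its own fields. A toy sketch with plain functions standing in for the two transform classes (file reading is faked with placeholder values):

```python
class ToyCompose:
    """Apply a sequence of dict-in/dict-out transforms (sketch of Compose)."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, results):
        for transform in self.transforms:
            results = transform(results)
        return results

def load_image(results):
    # Stands in for LoadImageFromFile: adds img, img_shape, ori_shape.
    results['img'] = 'decoded-image-placeholder'
    results['img_shape'] = (512, 1024)
    results['ori_shape'] = (512, 1024)
    return results

def load_annotations(results):
    # Stands in for LoadAnnotations: adds seg_fields, gt_seg_map.
    results['seg_fields'] = ['gt_seg_map']
    results['gt_seg_map'] = 'decoded-mask-placeholder'
    return results

pipeline = ToyCompose([load_image, load_annotations])
results = pipeline(dict(img_path='a.png', seg_map_path='a_mask.png'))
```

Note that `img_path` and `seg_map_path` survive in the dict: loading only adds fields, and nothing is dropped until the formatting step.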
### Pre-processing
`RandomResize`

- add: scale, scale_factor, keep_ratio
- update: img, img_shape, gt_seg_map

`Resize`

- add: scale, scale_factor, keep_ratio
- update: img, gt_seg_map, img_shape

`RandomCrop`

- update: img, pad_shape, gt_seg_map

`RandomFlip`

- add: flip, flip_direction
- update: img, gt_seg_map

`PhotoMetricDistortion`

- update: img
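"update" means a transform overwrites fields that earlier transforms created. A toy crop illustrates this (a simplification: the real `RandomCrop` also honors `cat_max_ratio` and maintains padding-related bookkeeping such as `pad_shape`):

```python
import numpy as np

class ToyRandomCrop:
    """Crop img and gt_seg_map to a fixed size (illustrative sketch only)."""

    def __init__(self, crop_size):
        self.crop_size = crop_size  # (height, width)

    def __call__(self, results):
        h, w = self.crop_size
        img = results['img']
        # Choose a random top-left corner inside the valid range.
        top = np.random.randint(0, img.shape[0] - h + 1)
        left = np.random.randint(0, img.shape[1] - w + 1)
        # Update existing fields in place: same keys, new values.
        results['img'] = img[top:top + h, left:left + w]
        results['gt_seg_map'] = results['gt_seg_map'][top:top + h,
                                                      left:left + w]
        return results

results = dict(img=np.zeros((8, 8, 3)), gt_seg_map=np.zeros((8, 8)))
results = ToyRandomCrop(crop_size=(4, 4))(results)
```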
### Formatting
`PackSegInputs`

- add: inputs, data_sample
- remove: keys specified by `meta_keys` (merged into the metainfo of data_sample), all other keys
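The packing step can be pictured as follows: everything listed in `meta_keys` is merged into the metainfo of the data sample, and every remaining key is dropped. A simplified sketch using plain dicts in place of the real `SegDataSample` (the `meta_keys` default here is a hypothetical subset):

```python
def toy_pack_seg_inputs(results,
                        meta_keys=('img_path', 'ori_shape', 'img_shape')):
    """Split a results dict into model inputs and a data sample (sketch)."""
    metainfo = {k: results[k] for k in meta_keys if k in results}
    data_sample = dict(gt_seg_map=results.get('gt_seg_map'),
                       metainfo=metainfo)
    # Keys outside 'img', 'gt_seg_map', and meta_keys are discarded.
    return dict(inputs=results['img'], data_sample=data_sample)

results = dict(img='image-tensor-placeholder',
               gt_seg_map='mask-placeholder',
               img_path='a.png',
               ori_shape=(512, 1024),
               img_shape=(512, 1024),
               scale_factor=1.0)
packed = toy_pack_seg_inputs(results)
```

After packing, only `inputs` and `data_sample` remain; intermediate bookkeeping such as `scale_factor` is gone unless it is listed in `meta_keys`.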