Jerry Jiarui XU 0529952270
[Doc] Add Chinese Documentation (#666)
* Add chinese doc base (#593)

* [Doc] Add Chinese doc for useful_tools_md (#642)

* [Doc] Add Chinese doc for get_started (#615)

* [Doc] Add Chinese doc for tutorial03_tutorial_datapipeline_md (#629)

* [Doc] Add Chinese doc for tutorials04_customized_models_md (#630)

* [Doc] Add Chinese doc for dataset_prepare_md (#640)

* [Doc] Add Chinese doc for tutorials05_training_tricks_md (#631)

* [Doc] Add Chinese doc for tutorials06_customized_runtime_md (#637)

* [Doc] Add Chinese doc for tutorials01_config_md (#628)

* [Doc] Add Chinese for modelzoo (#597)

* [Doc] Add Chinese doc for tutorial02_customized_dataset_md (#620)

* [Doc] Add Chinese doc for train.md (#616)

* [Doc] Add Chinese doc for inference.md (#617)

* fixed some dir

* fixed typo
Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
Co-authored-by: yuanzhang <yuanzhang@yuanzhangdeMacBook-Pro.local>
2021-07-03 08:54:32 -07:00


# Tutorial 3: Customize Data Pipelines

## Design of Data Pipelines

Following typical conventions, we use `Dataset` and `DataLoader` for data loading with multiple workers. `Dataset` returns a dict of data items corresponding to the arguments of the model's forward method.
Since the input images in semantic segmentation may have different sizes, we introduce a new `DataContainer` type in MMCV to help collect and distribute data of different sizes.
See [here](https://github.com/open-mmlab/mmcv/blob/master/mmcv/parallel/data_container.py) for more details.
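
As a rough illustration (not part of the tutorial; the shapes and flags below are only illustrative), `DataContainer` wraps a tensor or a metadata dict so that the collate function knows whether the item can be stacked into a batch or should stay on the CPU:

```python
import torch
from mmcv.parallel import DataContainer

# Images cropped to a common size can be stacked into a batch tensor.
img = DataContainer(torch.zeros(3, 512, 1024), stack=True)

# Per-sample metadata has no fixed shape, so it stays on the CPU and is not stacked.
img_metas = DataContainer(dict(ori_shape=(1024, 2048, 3), flip=False), cpu_only=True)
```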
The data preparation pipeline and the dataset are decoupled. Usually a dataset defines how to process the annotations, while a data pipeline defines all the steps needed to prepare a data dict. A pipeline consists of a sequence of operations; each operation takes a dict as input and outputs a new dict for the next transform.
The operations are categorized into data loading, pre-processing, formatting and test-time augmentation.
Here is a pipeline example for PSPNet:
```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 1024)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),
        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```
For each operation, we list the related dict fields that are added/updated/removed.

### Data loading

`LoadImageFromFile`

- add: img, img_shape, ori_shape

`LoadAnnotations`

- add: gt_semantic_seg, seg_fields

### Pre-processing

`Resize`

- add: scale, scale_idx, pad_shape, scale_factor, keep_ratio
- update: img, img_shape, *seg_fields

`RandomFlip`

- add: flip
- update: img, *seg_fields

`Pad`

- add: pad_fixed_size, pad_size_divisor
- update: img, pad_shape, *seg_fields

`RandomCrop`

- update: img, pad_shape, *seg_fields

`Normalize`

- add: img_norm_cfg
- update: img

`SegRescale`

- update: gt_semantic_seg

`PhotoMetricDistortion`

- update: img

### Formatting

`ToTensor`

- update: specified by `keys`.

`ImageToTensor`

- update: specified by `keys`.

`Transpose`

- update: specified by `keys`.

`ToDataContainer`

- update: specified by `keys`.

`DefaultFormatBundle`

- update: img, gt_semantic_seg

`Collect`

- add: img_meta (the keys of img_meta are specified by `meta_keys`)
- remove: all other keys except for those specified by `keys`

### Test time augmentation

`MultiScaleFlipAug`
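
For reference, here is a hedged sketch of how multi-scale and flip test-time augmentation could be enabled: set `img_ratios` (the values below are the ones commented out in the config above) and `flip=True` in the `MultiScaleFlipAug` block of `test_pipeline`.

```python
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),
        img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],  # multi-scale testing
        flip=True,  # horizontal flip testing
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```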
## Extend and use custom pipelines

1. Write a new pipeline in any file, e.g. `my_pipeline.py`. It takes a dict as input and returns a dict (a quick check follows the snippet below).
```python
from mmseg.datasets import PIPELINES


@PIPELINES.register_module()
class MyTransform:

    def __call__(self, results):
        results['dummy'] = True
        return results
```
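
A quick sanity check (hypothetical snippet, not from the tutorial): calling the transform on a results dict returns the same dict with the new field set.

```python
results = dict(img_info=dict(filename='demo.png'))  # hypothetical input dict
results = MyTransform()(results)
assert results['dummy'] is True
```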
2. Import the new class.
```python
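# For example, this import can live in mmseg/datasets/__init__.py; any module that
# is imported before the dataset is built will do, since importing my_pipeline is
# what triggers the @PIPELINES.register_module() registration above.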
from .my_pipeline import MyTransform
```
3. Use it in config files.
```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 1024)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='MyTransform'),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
```
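
As a hedged sketch of what happens under the hood (assuming `MyTransform` has been imported as in step 2): mmseg composes the list of transform configs into a single callable, looking each `type` up in the `PIPELINES` registry and calling the resulting transforms in order on a results dict.

```python
from mmseg.datasets.pipelines import Compose

pipeline = Compose(train_pipeline)
print(pipeline)  # lists the instantiated transforms, including MyTransform
```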