## Prepare datasets
It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`. If your folder structure is different, you may need to change the corresponding paths in the config files.
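For example, symlinks can be created as sketched below (a minimal sketch; `/data/datasets/...` is a hypothetical location of your downloaded datasets, adjust it to your own setup):

```shell
# Hypothetical source paths; point them at wherever your datasets actually live.
mkdir -p $MMSEGMENTATION/data
ln -s /data/datasets/cityscapes $MMSEGMENTATION/data/cityscapes
ln -s /data/datasets/VOCdevkit $MMSEGMENTATION/data/VOCdevkit
```

After linking (or copying) the datasets, the folder structure should look like this: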
```none
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│ ├── cityscapes
│ │ ├── leftImg8bit
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── gtFine
│ │ │ ├── train
│ │ │ ├── val
│ ├── VOCdevkit
│ │ ├── VOC2012
│ │ │ ├── JPEGImages
│ │ │ ├── SegmentationClass
│ │ │ ├── ImageSets
│ │ │ │ ├── Segmentation
│ │ ├── VOC2010
│ │ │ ├── JPEGImages
│ │ │ ├── SegmentationClassContext
│ │ │ ├── ImageSets
│ │ │ │ ├── SegmentationContext
│ │ │ │ │ ├── train.txt
│ │ │ │ │ ├── val.txt
│ │ │ ├── trainval_merged.json
│ │ ├── VOCaug
│ │ │ ├── dataset
│ │ │ │ ├── cls
│ ├── ade
│ │ ├── ADEChallengeData2016
│ │ │ ├── annotations
│ │ │ │ ├── training
│ │ │ │ ├── validation
│ │ │ ├── images
│ │ │ │ ├── training
│ │ │ │ ├── validation
│ ├── CHASE_DB1
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── DRIVE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── HRF
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ ├── STARE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
| ├── dark_zurich
| │   ├── gps
| │   │   ├── val
| │   │   └── val_ref
| │   ├── gt
| │   │   └── val
| │   ├── LICENSE.txt
| │   ├── lists_file_names
| │   │   ├── val_filenames.txt
| │   │   └── val_ref_filenames.txt
| │   ├── README.md
| │   └── rgb_anon
| │   | ├── val
| │   | └── val_ref
| ├── NighttimeDrivingTest
| | ├── gtCoarse_daytime_trainvaltest
| | │   └── test
| | │   └── night
| | └── leftImg8bit
| | | └── test
| | | └── night
│ ├── loveDA
│ │ ├── img_dir
│ │ │ ├── train
│ │ │ ├── val
│ │ │ ├── test
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
```
### Cityscapes
The dataset can be downloaded [here](https://www.cityscapes-dataset.com/downloads/) after registration.
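A minimal sketch of unpacking the downloads is shown below; the archive names `leftImg8bit_trainvaltest.zip` and `gtFine_trainvaltest.zip` are assumptions based on the usual download packages and may differ from what you actually downloaded:

```shell
# Assumed archive names; adjust if your downloads are named differently.
mkdir -p data/cityscapes
unzip leftImg8bit_trainvaltest.zip -d data/cityscapes
unzip gtFine_trainvaltest.zip -d data/cityscapes
```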
By convention, `**labelTrainIds.png` are used for cityscapes training.
Based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts),
we provide a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py)
to generate `**labelTrainIds.png`.
```shell
# --nproc 8 means 8 processes are used for conversion; it can be omitted.
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
```
### Pascal VOC
Pascal VOC 2012 can be downloaded from [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar).
Besides, most recent works on the Pascal VOC dataset exploit extra augmentation data, which can be found [here](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz).
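A hedged sketch of unpacking both archives is given below; it assumes `benchmark.tgz` extracts to a top-level `benchmark_RELEASE/` folder, so verify the actual archive layout before copying:

```shell
# Unpack the official VOC2012 archive; this creates data/VOCdevkit/VOC2012.
tar -xf VOCtrainval_11-May-2012.tar -C data/
# Unpack the augmented annotations (assumed top-level folder: benchmark_RELEASE/)
# and place them so that data/VOCdevkit/VOCaug/dataset/cls exists.
tar -xzf benchmark.tgz
mkdir -p data/VOCdevkit/VOCaug
cp -r benchmark_RELEASE/dataset data/VOCdevkit/VOCaug/
```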
If you would like to use the augmented VOC dataset, please run the following command to convert the augmented annotations into the proper format.
```shell
# --nproc 8 means 8 processes are used for conversion; it can be omitted.
python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
```
Please refer to [concatenate datasets](https://github.com/open-mmlab/mmsegmentation/blob/master/docs_zh-CN/tutorials/customize_datasets.md#%E6%8B%BC%E6%8E%A5%E6%95%B0%E6%8D%AE%E9%9B%86) for details about how to concatenate datasets and train them together.
### ADE20K
The training and validation sets of ADE20K can be downloaded from [here](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip).
You may also download the test set from [here](http://data.csail.mit.edu/places/ADEchallenge/release_test.zip).
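A minimal sketch of unpacking the archive, assuming it contains a top-level `ADEChallengeData2016/` folder:

```shell
# Assumed top-level folder inside the zip: ADEChallengeData2016/
mkdir -p data/ade
unzip ADEChallengeData2016.zip -d data/ade
```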
### Pascal Context
The training and validation sets of Pascal Context can be downloaded from [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2010/VOCtrainval_03-May-2010.tar).
You may also download the test set from [here](http://host.robots.ox.ac.uk:8080/eval/downloads/VOC2010test.tar) after registration.
To split the training and validation sets from the original dataset, you may download trainval_merged.json from [here](https://codalabuser.blob.core.windows.net/public/trainval_merged.json).
If you would like to use the Pascal Context dataset, please install the [Detail API](https://github.com/zhanghang1989/detail-api) and then run the conversion command below to convert the annotations into the proper format.
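One possible way to install the Detail API is sketched below; this is an assumption (the Python package is expected to live in the repository's `PythonAPI/` subdirectory), so check the repository's README for the authoritative instructions:

```shell
# Assumed install route: the Detail API ships its Python package in PythonAPI/.
pip install cython
pip install "git+https://github.com/zhanghang1989/detail-api.git#subdirectory=PythonAPI"
```

With the Detail API installed, convert the annotations: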
```shell
python tools/convert_datasets/pascal_context.py data/VOCdevkit data/VOCdevkit/VOC2010/trainval_merged.json
```
### CHASE DB1
The training and validation sets of CHASE DB1 can be downloaded from [here](https://staffnet.kingston.ac.uk/~ku15565/CHASE_DB1/assets/CHASEDB1.zip).
To convert the CHASE DB1 dataset to MMSegmentation format, run the following command:
```shell
python tools/convert_datasets/chase_db1.py /path/to/CHASEDB1.zip
```
The script will generate the correct folder structure automatically.
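A quick sanity check after conversion (assuming the script writes to the default `data/CHASE_DB1` location shown in the directory tree above):

```shell
# Assumed default output location: data/CHASE_DB1
ls data/CHASE_DB1/images/training | head -n 3
ls data/CHASE_DB1/annotations/training | head -n 3
```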
### DRIVE
The training and validation sets of DRIVE can be downloaded from [here](https://drive.grand-challenge.org/).
Before downloading, you need to register an account. Currently '1st_manual' is not provided officially, so you need to obtain it elsewhere.
To convert the DRIVE dataset to MMSegmentation format, run the following command:
```shell
python tools/convert_datasets/drive.py /path/to/training.zip /path/to/test.zip
```
The script will generate the correct folder structure automatically.
### HRF
First, download [healthy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy.zip), [glaucoma.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma.zip), [diabetic_retinopathy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy.zip), [healthy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy_manualsegm.zip), [glaucoma_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma_manualsegm.zip) and [diabetic_retinopathy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy_manualsegm.zip).
To convert the HRF dataset to MMSegmentation format, run the following command:
```shell
python tools/convert_datasets/hrf.py /path/to/healthy.zip /path/to/healthy_manualsegm.zip /path/to/glaucoma.zip /path/to/glaucoma_manualsegm.zip /path/to/diabetic_retinopathy.zip /path/to/diabetic_retinopathy_manualsegm.zip
```
The script will generate the correct folder structure automatically.
### STARE
First, download [stare-images.tar](http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar), [labels-ah.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar) and [labels-vk.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-vk.tar).
To convert the STARE dataset to MMSegmentation format, run the following command:
```shell
python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels-ah.tar /path/to/labels-vk.tar
```
The script will generate the correct folder structure automatically.
### Dark Zurich
Since we only support testing models on this dataset, you only need to download the [validation set](https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_val_anon.zip).
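A hedged unpacking sketch; the archive name comes from the link above, and the layout inside the zip is assumed to match the `data/dark_zurich` tree shown at the top of this page, so verify it after unpacking:

```shell
# Assumption: the zip's contents map directly onto data/dark_zurich/{gps,gt,rgb_anon,...}.
mkdir -p data/dark_zurich
unzip Dark_Zurich_val_anon.zip -d data/dark_zurich
```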
### Nighttime Driving
Since we only support testing models on this dataset, you only need to download the [test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).
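A hedged unpacking sketch; whether the zip already contains a top-level `NighttimeDrivingTest/` folder is an assumption, so adjust the `-d` target if the extracted layout ends up nested one level too deep:

```shell
# Assumption: the zip extracts its contents directly (no extra top-level folder).
mkdir -p data/NighttimeDrivingTest
unzip NighttimeDrivingTest.zip -d data/NighttimeDrivingTest
```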
### LoveDA
The [LoveDA dataset](https://drive.google.com/drive/folders/1ibYV0qwn4yuuh068Rnc-w4tPi0U0c-ti?usp=sharing) can be downloaded from Google Drive.
Alternatively, it can be downloaded from [zenodo](https://zenodo.org/record/5706578#.YZvN7SYRXdF) with the following commands:
```shell
# Download Train.zip
wget https://zenodo.org/record/5706578/files/Train.zip
# Download Val.zip
wget https://zenodo.org/record/5706578/files/Val.zip
# Download Test.zip
wget https://zenodo.org/record/5706578/files/Test.zip
```
For the LoveDA dataset, please run the following command to re-organize the downloaded data:
```shell
python tools/convert_datasets/loveda.py /path/to/loveDA
```
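For example (a hedged sketch; it assumes the conversion script expects the three downloaded archives inside the directory passed to it, which you can confirm with `python tools/convert_datasets/loveda.py -h`):

```shell
# Assumption: the converter picks up Train.zip / Val.zip / Test.zip from the given directory.
mkdir -p data/loveDA
mv Train.zip Val.zip Test.zip data/loveDA/
python tools/convert_datasets/loveda.py data/loveDA
```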
Please refer to [this document](https://github.com/open-mmlab/mmsegmentation/blob/master/docs_zh-CN/inference.md) for how to use trained models to generate predictions on the LoveDA test set and submit them to the official evaluation server.
More details about LoveDA can be found [here](https://github.com/Junjue-Wang/LoveDA).