CodeCamp #139 [Feature] Support REFUGE dataset. (#2554)

## Motivation 
Add support for the REFUGE dataset.
Old PR: https://github.com/open-mmlab/mmsegmentation/pull/2420

---------

Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>
Andrew Lau 2023-02-03 16:02:19 +08:00 committed by GitHub
9 changed files with 391 additions and 54 deletions


@@ -0,0 +1,90 @@
# dataset settings
dataset_type = 'REFUGEDataset'
data_root = 'data/REFUGE'
train_img_scale = (2056, 2124)
val_img_scale = (1634, 1634)
test_img_scale = (1634, 1634)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_label=False),
dict(
type='RandomResize',
scale=train_img_scale,
ratio_range=(0.5, 2.0),
keep_ratio=True),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', prob=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs')
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=val_img_scale, keep_ratio=True),
# add loading annotation after ``Resize`` because ground truth
# does not need to do resize data transform
dict(type='LoadAnnotations', reduce_zero_label=False),
dict(type='PackSegInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=test_img_scale, keep_ratio=True),
# add loading annotation after ``Resize`` because ground truth
# does not need to do resize data transform
dict(type='LoadAnnotations', reduce_zero_label=False),
dict(type='PackSegInputs')
]
img_ratios = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
tta_pipeline = [
dict(type='LoadImageFromFile', backend_args=dict(backend='local')),
dict(
type='TestTimeAug',
transforms=[
[
dict(type='Resize', scale_factor=r, keep_ratio=True)
for r in img_ratios
],
[
dict(type='RandomFlip', prob=0., direction='horizontal'),
dict(type='RandomFlip', prob=1., direction='horizontal')
], [dict(type='LoadAnnotations')], [dict(type='PackSegInputs')]
])
]
train_dataloader = dict(
batch_size=4,
num_workers=4,
persistent_workers=True,
sampler=dict(type='InfiniteSampler', shuffle=True),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img_path='images/training', seg_map_path='annotations/training'),
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=1,
num_workers=4,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img_path='images/validation',
seg_map_path='annotations/validation'),
pipeline=val_pipeline))
test_dataloader = dict(
batch_size=1,
num_workers=4,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img_path='images/test', seg_map_path='annotations/test'),
        pipeline=test_pipeline))
val_evaluator = dict(type='IoUMetric', iou_metrics=['mDice'])
test_evaluator = val_evaluator
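
For context, a runnable training config would inherit this base file together with a model, runtime and schedule. A minimal sketch (the model and schedule base files named here are assumptions for illustration, not part of this change):

```python
# Hypothetical downstream config, e.g. configs/unet/unet-s5-d16_fcn_refuge-512x512.py
_base_ = [
    '../_base_/models/fcn_unet_s5-d16.py',  # assumed model base
    '../_base_/datasets/refuge.py',  # the dataset file above
    '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_20k.py'  # assumed schedule base
]
crop_size = (512, 512)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)
```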


@@ -145,6 +145,15 @@ mmsegmentation
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── REFUGE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
```
### Cityscapes
@@ -330,7 +339,7 @@ For Potsdam dataset, please run the following command to download and re-organiz
python tools/dataset_converters/potsdam.py /path/to/potsdam
```
In our default setting, it will generate 3,456 images for training and 2,016 images for validation.
### ISPRS Vaihingen
@@ -383,7 +392,7 @@ You may need to follow the following structure for dataset preparation after dow
python tools/dataset_converters/isaid.py /path/to/iSAID
```
In our default setting (`patch_width`=896, `patch_height`=896, `overlap_area`=384), it will generate 33,978 images for training and 11,644 images for validation.
## LIP(Look Into Person) dataset
@@ -436,7 +445,7 @@ cd ./RawData/Training
Then create `train.txt` and `val.txt` to split the dataset.
According to TransUNet, the dataset is divided as follows:
train.txt
@@ -500,7 +509,45 @@ Then, use this command to convert synapse dataset.
python tools/dataset_converters/synapse.py --dataset-path /path/to/synapse
```
In our default setting, it will generate 2,211 2D images for training and 1,568 2D images for validation.
Note that MMSegmentation's default evaluation metrics (such as the mean Dice value) are calculated on 2D slices,
which is not comparable to the 3D-scan results reported in papers such as [TransUNet](https://arxiv.org/abs/2102.04306).
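
For reference, the Dice score underlying this metric is computed per 2D slice as

$$\mathrm{Dice}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}$$

where $P$ and $G$ are the predicted and ground-truth masks of a single slice; 3D evaluation instead applies the same formula to whole volumes, which is why the two numbers are not directly comparable.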
### REFUGE
Register at the [REFUGE Challenge](https://refuge.grand-challenge.org) site and download the [REFUGE dataset](https://refuge.grand-challenge.org/REFUGE2Download).
Then unzip `REFUGE2.zip`; the contents of the original dataset include:
```none
├── REFUGE2
│ ├── REFUGE2
│ │ ├── Annotation-Training400.zip
│ │ ├── REFUGE-Test400.zip
│ │ ├── REFUGE-Test-GT.zip
│ │ ├── REFUGE-Training400.zip
│ │ ├── REFUGE-Validation400.zip
│ │ ├── REFUGE-Validation400-GT.zip
│ ├── __MACOSX
```
Please run the following command to convert the REFUGE dataset:
```shell
python tools/dataset_converters/refuge.py --raw_data_root=/path/to/refuge/REFUGE2/REFUGE2
```
The script will generate the following directory structure:
```none
│ ├── REFUGE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
```
It includes 400 images for training, 400 for validation and 400 for testing, matching the REFUGE 2018 dataset split.
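
After conversion, a quick sanity check is to instantiate the new dataset class directly; a minimal sketch, assuming the default `data/REFUGE` output location from the command above:

```python
from mmseg.datasets import REFUGEDataset

# Count the training samples; the pipeline is left empty on purpose.
dataset = REFUGEDataset(
    data_root='data/REFUGE',
    data_prefix=dict(
        img_path='images/training',
        seg_map_path='annotations/training'),
    pipeline=[])
print(len(dataset))  # expected: 400
```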


@@ -1,6 +1,6 @@
## Prepare datasets (to be updated)
It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`. If your folder structure is different, you may need to change the corresponding paths in config files.
```none
mmsegmentation
@@ -126,51 +126,60 @@ mmsegmentation
│ │ ├── ann_dir
│ │ │ ├── train
│ │ │ ├── val
│ ├── REFUGE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
```
### Cityscapes
After registration, the dataset can be downloaded [here](https://www.cityscapes-dataset.com/downloads/).
Usually, `**labelTrainIds.png` are used to train cityscapes.
Based on [cityscapesscripts](https://github.com/mcordts/cityscapesScripts), we provide a [script](https://github.com/open-mmlab/mmsegmentation/blob/master/tools/convert_datasets/cityscapes.py) to generate `**labelTrainIds.png`.
```shell
# --nproc 8 means 8 processes are used for conversion; it may also be omitted.
python tools/convert_datasets/cityscapes.py data/cityscapes --nproc 8
```
### Pascal VOC
Pascal VOC 2012 can be downloaded [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar).
In addition, most recent works on the Pascal VOC dataset exploit extra augmentation data, which can be found [here](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz).
If you would like to use the augmented VOC dataset, please run the following command to convert the augmentation annotations into the proper format.
```shell
# --nproc 8 means 8 processes are used for conversion; it may also be omitted.
python tools/convert_datasets/voc_aug.py data/VOCdevkit data/VOCdevkit/VOCaug --nproc 8
```
For more details on how to concatenate datasets and train them together, please refer to [concatenating datasets](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/tutorials/customize_datasets.md#%E6%8B%BC%E6%8E%A5%E6%95%B0%E6%8D%AE%E9%9B%86).
### ADE20K
The training and validation set of ADE20K can be downloaded [here](http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip).
You may also download the test set [here](http://data.csail.mit.edu/places/ADEchallenge/release_test.zip).
### Pascal Context
The training and validation set of Pascal Context can be downloaded [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2010/VOCtrainval_03-May-2010.tar).
After registration, you may also download the test set [here](http://host.robots.ox.ac.uk:8080/eval/downloads/VOC2010test.tar).
To split the training and validation set from the original dataset, you may download trainval_merged.json [here](https://codalabuser.blob.core.windows.net/public/trainval_merged.json).
If you would like to use the Pascal Context dataset, please install [Detail](https://github.com/zhanghang1989/detail-api) and then run the following command to convert the annotations into the proper format.
```shell
python tools/convert_datasets/pascal_context.py data/VOCdevkit data/VOCdevkit/VOC2010/trainval_merged.json
@@ -178,64 +187,64 @@ python tools/convert_datasets/pascal_context.py data/VOCdevkit data/VOCdevkit/VO
### CHASE DB1
The training and validation set of CHASE DB1 can be downloaded [here](https://staffnet.kingston.ac.uk/~ku15565/CHASE_DB1/assets/CHASEDB1.zip).
To convert the CHASE DB1 dataset to MMSegmentation format, you need to run the following command:
```shell
python tools/convert_datasets/chase_db1.py /path/to/CHASEDB1.zip
```
The script will automatically generate the correct folder structure.
### DRIVE
The training and validation set of DRIVE can be downloaded [here](https://drive.grand-challenge.org/).
Before that, you need to register an account; currently '1st_manual' is not provided officially, so you need to obtain it elsewhere.
To convert the DRIVE dataset to MMSegmentation format, you need to run the following command:
```shell
python tools/convert_datasets/drive.py /path/to/training.zip /path/to/test.zip
```
The script will automatically generate the correct folder structure.
### HRF
First, download [healthy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy.zip), [glaucoma.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma.zip), [diabetic_retinopathy.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy.zip), [healthy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/healthy_manualsegm.zip), [glaucoma_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/glaucoma_manualsegm.zip) and [diabetic_retinopathy_manualsegm.zip](https://www5.cs.fau.de/fileadmin/research/datasets/fundus-images/diabetic_retinopathy_manualsegm.zip).
To convert the HRF dataset to MMSegmentation format, you need to run the following command:
```shell
python tools/convert_datasets/hrf.py /path/to/healthy.zip /path/to/healthy_manualsegm.zip /path/to/glaucoma.zip /path/to/glaucoma_manualsegm.zip /path/to/diabetic_retinopathy.zip /path/to/diabetic_retinopathy_manualsegm.zip
```
The script will automatically generate the correct folder structure.
### STARE
First, download [stare-images.tar](http://cecas.clemson.edu/~ahoover/stare/probing/stare-images.tar), [labels-ah.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-ah.tar) and [labels-vk.tar](http://cecas.clemson.edu/~ahoover/stare/probing/labels-vk.tar).
To convert the STARE dataset to MMSegmentation format, you need to run the following command:
```shell
python tools/convert_datasets/stare.py /path/to/stare-images.tar /path/to/labels-ah.tar /path/to/labels-vk.tar
```
The script will automatically generate the correct folder structure.
### Dark Zurich
Since we only support testing models on this dataset, you only need to download the [validation set](https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_val_anon.zip).
### Nighttime Driving
Since we only support testing models on this dataset, you only need to download the [test set](http://data.vision.ee.ethz.ch/daid/NighttimeDriving/NighttimeDrivingTest.zip).
### LoveDA
The [LoveDA dataset](https://drive.google.com/drive/folders/1ibYV0qwn4yuuh068Rnc-w4tPi0U0c-ti?usp=sharing) can be downloaded from Google Drive.
Or it can be downloaded from [zenodo](https://zenodo.org/record/5706578#.YZvN7SYRXdF); you need to run the following command:
@@ -248,46 +257,46 @@ wget https://zenodo.org/record/5706578/files/Val.zip
wget https://zenodo.org/record/5706578/files/Test.zip
```
For the LoveDA dataset, please run the following command to download and re-organize the dataset:
```shell
python tools/convert_datasets/loveda.py /path/to/loveDA
```
Please refer to [here](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/zh_cn/inference.md) for using trained models to predict on the LoveDA test set and submit the results to the official site.
More details about LoveDA can be found [here](https://github.com/Junjue-Wang/LoveDA).
### ISPRS Potsdam
The [Potsdam](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-potsdam/) dataset is an urban remote sensing dataset with 2D semantic segmentation annotations.
It can be obtained from the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/); '2_Ortho_RGB.zip' and '5_Labels_all_noBoundary.zip' are required.
For the Potsdam dataset, please run the following command to download and re-organize the dataset:
```shell
python tools/convert_datasets/potsdam.py /path/to/potsdam
```
In our default setting, it will generate 3,456 images for training and 2,016 images for validation.
### ISPRS Vaihingen
The [Vaihingen](https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-vaihingen/) dataset is an urban remote sensing dataset with 2D semantic segmentation annotations.
It can be obtained from the challenge [homepage](https://www2.isprs.org/commissions/comm2/wg4/benchmark/data-request-form/); 'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' are required.
For the Vaihingen dataset, please run the following command to download and re-organize the dataset:
```shell
python tools/convert_datasets/vaihingen.py /path/to/vaihingen
```
In our default setting (`clip_size`=512, `stride_size`=256), it will generate 344 images for training and 398 images for validation.
### iSAID
@@ -297,7 +306,7 @@ The annotations of the iSAID dataset (train/val) can be downloaded from [iSAID](https://captain-w
This is a large-scale remote sensing dataset for instance segmentation (it can also be used for semantic segmentation).
After downloading, you need to adjust the dataset folder into the following structure before conversion.
```
│ ├── iSAID
@@ -404,3 +413,41 @@ python tools/dataset_converters/synapse.py --dataset-path /path/to/synapse
In our default setting, it will generate 2,211 2D images for training and 1,568 2D images for validation.
Note that MMSegmentation's default evaluation metrics (such as the mean Dice value) are calculated on 2D slices, unlike [TransUNet](https://arxiv.org/abs/2102.04306), which computes them on whole 3D scans.
### REFUGE
After registering at the [official site](https://refuge.grand-challenge.org), download `REFUGE2.zip` from the [REFUGE dataset](https://refuge.grand-challenge.org/REFUGE2Download) page; the unzipped contents are as follows:
```none
├── REFUGE2
│ ├── REFUGE2
│ │ ├── Annotation-Training400.zip
│ │ ├── REFUGE-Test400.zip
│ │ ├── REFUGE-Test-GT.zip
│ │ ├── REFUGE-Training400.zip
│ │ ├── REFUGE-Validation400.zip
│ │ ├── REFUGE-Validation400-GT.zip
│ ├── __MACOSX
```
Run the following command to split the dataset into training, validation and test sets following the REFUGE 2018 challenge split:
```shell
python tools/convert_datasets/refuge.py --raw_data_root=/path/to/refuge/REFUGE2/REFUGE2
```
The script will automatically generate the folder structure below:
```none
│ ├── REFUGE
│ │ ├── images
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
│ │ ├── annotations
│ │ │ ├── training
│ │ │ ├── validation
│ │ │ ├── test
```
It includes 400 images for training, 400 images for validation and 400 images for testing.


@@ -17,6 +17,7 @@ from .loveda import LoveDADataset
from .night_driving import NightDrivingDataset
from .pascal_context import PascalContextDataset, PascalContextDataset59
from .potsdam import PotsdamDataset
from .refuge import REFUGEDataset
from .stare import STAREDataset
from .synapse import SynapseDataset
# yapf: disable
@@ -48,5 +49,5 @@ __all__ = [
'DecathlonDataset', 'LIPDataset', 'ResizeShortestEdge',
'BioMedicalGaussianNoise', 'BioMedicalGaussianBlur',
'BioMedicalRandomGamma', 'BioMedical3DPad', 'RandomRotFlip',
'SynapseDataset', 'REFUGEDataset'
]


@@ -0,0 +1,28 @@
# Copyright (c) OpenMMLab. All rights reserved.
import mmengine.fileio as fileio

from mmseg.registry import DATASETS
from .basesegdataset import BaseSegDataset


@DATASETS.register_module()
class REFUGEDataset(BaseSegDataset):
    """REFUGE dataset.

    In segmentation map annotation for REFUGE, 0 stands for background,
    1 for optic cup and 2 for optic disc. ``reduce_zero_label`` is fixed
    to False. The ``img_suffix`` is fixed to '.png' and ``seg_map_suffix``
    is fixed to '.png'.
    """
    METAINFO = dict(
        classes=('background', 'Optic Cup', 'Optic Disc'),
        palette=[[120, 120, 120], [6, 230, 230], [56, 59, 120]])

    def __init__(self, **kwargs) -> None:
        super().__init__(
            img_suffix='.png',
            seg_map_suffix='.png',
            reduce_zero_label=False,
            **kwargs)
        assert fileio.exists(
            self.data_prefix['img_path'], backend_args=self.backend_args)
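
The values written by the conversion script below line up with ``METAINFO`` (0: background, 1: optic cup, 2: optic disc). A small sketch to verify this on one converted mask (the filename is a placeholder):

```python
import mmcv
import numpy as np

# 'g0001.png' is a hypothetical converted annotation file.
mask = mmcv.imread('data/REFUGE/annotations/training/g0001.png',
                   flag='unchanged')
print(np.unique(mask))  # expected: a subset of [0 1 2]
```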

(Two binary image files added: 2.8 KiB and 287 KiB; contents not shown.)


@@ -8,7 +8,8 @@ import pytest
from mmseg.datasets import (ADE20KDataset, BaseSegDataset, CityscapesDataset,
                            COCOStuffDataset, DecathlonDataset, ISPRSDataset,
                            LIPDataset, LoveDADataset, PascalVOCDataset,
                            PotsdamDataset, REFUGEDataset, SynapseDataset,
                            iSAIDDataset)
from mmseg.registry import DATASETS
from mmseg.utils import get_classes, get_palette
@@ -232,6 +233,19 @@ def test_synapse():
assert len(test_dataset) == 2

def test_refuge():
    test_dataset = REFUGEDataset(
        pipeline=[],
        data_prefix=dict(
            img_path=osp.join(
                osp.dirname(__file__),
                '../data/pseudo_refuge_dataset/img_dir'),
            seg_map_path=osp.join(
                osp.dirname(__file__),
                '../data/pseudo_refuge_dataset/ann_dir')))
    assert len(test_dataset) == 1


def test_isaid():
    test_dataset = iSAIDDataset(
        pipeline=[],


@@ -0,0 +1,110 @@
# Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os
import os.path as osp
import tempfile
import zipfile

import mmcv
import numpy as np
from mmengine.utils import mkdir_or_exist


def parse_args():
    parser = argparse.ArgumentParser(
        description='Convert REFUGE dataset to mmsegmentation format')
    parser.add_argument('--raw_data_root', help='the root path of raw data')
    parser.add_argument('--tmp_dir', help='path of the temporary directory')
    parser.add_argument('-o', '--out_dir', help='output path')
    args = parser.parse_args()
    return args


def extract_img(root: str,
                zip_path: str,
                out_dir: str,
                mode: str = 'train',
                file_type: str = 'img') -> None:
    """Extract one zip archive and convert its images to PNG.

    Args:
        root (str): root where the extracted data is saved.
        zip_path (str): path of the zip file to extract.
        out_dir (str): root dir where the converted data is saved.
        mode (str, optional): dataset split ('training', 'validation' or
            'test'). Defaults to 'train'.
        file_type (str, optional): 'images' or 'annotations'.
            Defaults to 'img'.
    """
    zip_file = zipfile.ZipFile(zip_path)
    zip_file.extractall(root)
    for cur_dir, dirs, files in os.walk(root):
        # only process leaf directories, skipping "Illustration" and
        # "__MACOSX" directories
        if len(dirs) == 0 and \
                osp.basename(cur_dir).find('Illustration') == -1 and \
                cur_dir.find('MACOSX') == -1:
            file_names = [
                file for file in files
                if file.endswith('.jpg') or file.endswith('.bmp')
            ]
            for filename in sorted(file_names):
                img = mmcv.imread(osp.join(cur_dir, filename))
                if file_type == 'annotations':
                    # keep a single channel and remap the raw GT values
                    # (0: optic cup, 128: optic disc, 255: background)
                    # to training labels (1: cup, 2: disc, 0: background)
                    img = img[:, :, 0]
                    img[np.where(img == 0)] = 1
                    img[np.where(img == 128)] = 2
                    img[np.where(img == 255)] = 0
                mmcv.imwrite(
                    img,
                    osp.join(out_dir, file_type, mode,
                             osp.splitext(filename)[0] + '.png'))


def main():
    args = parse_args()
    raw_data_root = args.raw_data_root
    if args.out_dir is None:
        out_dir = osp.join('./data', 'REFUGE')
    else:
        out_dir = args.out_dir

    print('Making directories...')
    mkdir_or_exist(out_dir)
    mkdir_or_exist(osp.join(out_dir, 'images'))
    mkdir_or_exist(osp.join(out_dir, 'images', 'training'))
    mkdir_or_exist(osp.join(out_dir, 'images', 'validation'))
    mkdir_or_exist(osp.join(out_dir, 'images', 'test'))
    mkdir_or_exist(osp.join(out_dir, 'annotations'))
    mkdir_or_exist(osp.join(out_dir, 'annotations', 'training'))
    mkdir_or_exist(osp.join(out_dir, 'annotations', 'validation'))
    mkdir_or_exist(osp.join(out_dir, 'annotations', 'test'))

    print('Generating images and annotations...')
    # only the zip files directly under the raw data root are processed
    cur_dir, dirs, files = list(os.walk(raw_data_root))[0]
    files = list(filter(lambda x: x.endswith('.zip'), files))
    with tempfile.TemporaryDirectory(dir=args.tmp_dir) as tmp_dir:
        for file in files:
            # infer the split (training/validation/test) from the file name
            mode = list(
                filter(lambda x: file.lower().find(x) != -1,
                       ['training', 'test', 'validation']))[0]
            file_root = osp.join(tmp_dir, file[:-4])
            # GT/annotation archives go to 'annotations', others to 'images'
            file_type = 'images' if file.find('Anno') == -1 and file.find(
                'GT') == -1 else 'annotations'
            extract_img(file_root, osp.join(cur_dir, file), out_dir, mode,
                        file_type)

    print('Done!')


if __name__ == '__main__':
    main()
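
The core of the conversion is the grayscale remap in `extract_img`. A self-contained sketch of that mapping on a synthetic mask, for clarity:

```python
import numpy as np

# Synthetic REFUGE-style GT tile: 0 = optic cup, 128 = optic disc,
# 255 = background (the raw values extract_img consumes).
gt = np.array([[255, 128], [0, 255]], dtype=np.uint8)

remapped = gt.copy()
remapped[gt == 0] = 1    # optic cup  -> class 1
remapped[gt == 128] = 2  # optic disc -> class 2
remapped[gt == 255] = 0  # background -> class 0
print(remapped)  # [[0 2] [1 0]]
```

Masking against the original array sidesteps the ordering subtlety of the in-place version above, which is correct only because the written values {1, 2, 0} never collide with the raw values still to be matched.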