diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
index 3ba13e0ce..6eaae3e0d 100644
--- a/.github/ISSUE_TEMPLATE/config.yml
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -1 +1,6 @@
 blank_issues_enabled: false
+
+contact_links:
+  - name: MMSegmentation Documentation
+    url: https://mmsegmentation.readthedocs.io
+    about: Check the docs and FAQ to see if your question is already answered.
diff --git a/docs/config.md b/docs/config.md
index aace67ff9..be9226a60 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -363,3 +363,13 @@ data = dict(
     test=dict(pipeline=test_pipeline))
 ```
 We first define the new `train_pipeline`/`test_pipeline` and pass them into `data`.
+
+Similarly, if we would like to switch from `SyncBN` to `BN` or `MMSyncBN`, we need to substitute every `norm_cfg` in the config.
+```python
+_base_ = '../pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py'
+norm_cfg = dict(type='BN', requires_grad=True)
+model = dict(
+    backbone=dict(norm_cfg=norm_cfg),
+    decode_head=dict(norm_cfg=norm_cfg),
+    auxiliary_head=dict(norm_cfg=norm_cfg))
+```
diff --git a/docs/getting_started.md b/docs/getting_started.md
index 3a9b65603..16cb3b4f0 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -132,6 +132,8 @@ Assume that you have already downloaded the checkpoints to the directory `checkp
        checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
        4 --out results.pkl --eval mIoU cityscapes
    ```
+   Note: There is some inconsistency between the Cityscapes mIoU and our mIoU, because the official Cityscapes evaluation weights each class by its size by default.
+   We use the simple unweighted average over classes for all datasets.

 5. Test PSPNet on cityscapes test split with 4 GPUs, and generate the png files to be submitted to the official evaluation server.
@@ -151,8 +153,8 @@ Assume that you have already downloaded the checkpoints to the directory `checkp
        4 --format-only --options "imgfile_prefix=./pspnet_test_results"
    ```
-You will get png files under `./pspnet_test_results` directory.
-You may run `zip -r results.zip pspnet_test_results/` and submit the zip file to [evaluation server](https://www.cityscapes-dataset.com/submit/).
+   You will get png files under the `./pspnet_test_results` directory.
+   You may run `zip -r results.zip pspnet_test_results/` and submit the zip file to the [evaluation server](https://www.cityscapes-dataset.com/submit/).

 ### Image demo
diff --git a/docs/tutorials/training_tricks.md b/docs/tutorials/training_tricks.md
index 5ff4b18a7..85552166f 100644
--- a/docs/tutorials/training_tricks.md
+++ b/docs/tutorials/training_tricks.md
@@ -26,3 +26,19 @@ model=dict(
         sampler=dict(type='OHEMPixelSampler', thresh=0.7, min_kept=100000)) )
 ```
 In this way, only pixels with a confidence score under 0.7 are used for training, and we keep at least 100000 pixels during training.
+
+## Class Balanced Loss
+For a dataset with an unbalanced class distribution, you may change the loss weight of each class.
+Here is an example for the Cityscapes dataset.
+```python
+_base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py'
+model = dict(
+    decode_head=dict(
+        loss_decode=dict(
+            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0,
+            # DeepLab used this class weight for Cityscapes
+            class_weight=[0.8373, 0.9180, 0.8660, 1.0345, 1.0166, 0.9969, 0.9754,
+                          1.0489, 0.8786, 1.0023, 0.9539, 0.9843, 1.1116, 0.9037,
+                          1.0865, 1.0955, 1.0865, 1.1529, 1.0507])))
+```
+`class_weight` will be passed into `CrossEntropyLoss` as the `weight` argument.
+Please refer to the [PyTorch documentation](https://pytorch.org/docs/stable/nn.html?highlight=crossentropy#torch.nn.CrossEntropyLoss) for details.
diff --git a/tools/convert_datasets/voc_aug.py b/tools/convert_datasets/voc_aug.py
index fd5400361..942746351 100644
--- a/tools/convert_datasets/voc_aug.py
+++ b/tools/convert_datasets/voc_aug.py
@@ -50,8 +50,12 @@ def main():
         list(mmcv.scandir(in_dir, suffix='.mat')),
         nproc=nproc)

-    with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:
-        full_aug_list = [line.strip() for line in f]
+    full_aug_list = []
+    with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:
+        full_aug_list += [line.strip() for line in f]
+    with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:
+        full_aug_list += [line.strip() for line in f]
+
     with open(
             osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
                      'train.txt')) as f:
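
As background for the mIoU note added to `docs/getting_started.md` above: the two averaging schemes can diverge noticeably on skewed datasets. A minimal sketch of the difference; the per-class IoU values and pixel counts below are made-up illustrations, not MMSegmentation internals:

```python
import numpy as np

# Assumed per-class IoU values and per-class pixel counts (illustrative only).
iou = np.array([0.98, 0.85, 0.92, 0.60])
class_size = np.array([1_000_000, 50_000, 200_000, 10_000])

# Simple unweighted mean over classes, as reported by `--eval mIoU`
# according to the note above.
simple_miou = iou.mean()                                     # 0.8375

# Mean weighted by class size, which the note attributes to the
# official Cityscapes evaluation by default.
weighted_miou = (iou * class_size).sum() / class_size.sum()  # ~0.9623

print(simple_miou, weighted_miou)
```

Large, easy classes (e.g. road) dominate the weighted average, which is why the two numbers rarely match.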
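
Likewise, for the `class_weight` option documented in `docs/tutorials/training_tricks.md` above: since it is passed to `torch.nn.CrossEntropyLoss` as the `weight` argument, its effect can be reproduced in plain PyTorch. A minimal sketch; the tensor shapes are illustrative assumptions, not the shapes MMSegmentation uses internally:

```python
import torch
import torch.nn as nn

# The 19-entry class_weight list from the config becomes the per-class
# `weight` tensor of CrossEntropyLoss; classes with weight > 1 contribute
# proportionally more to the loss.
class_weight = torch.tensor([
    0.8373, 0.9180, 0.8660, 1.0345, 1.0166, 0.9969, 0.9754,
    1.0489, 0.8786, 1.0023, 0.9539, 0.9843, 1.1116, 0.9037,
    1.0865, 1.0955, 1.0865, 1.1529, 1.0507])
criterion = nn.CrossEntropyLoss(weight=class_weight)

# Illustrative shapes: (N, C, H, W) logits and (N, H, W) integer labels.
logits = torch.randn(2, 19, 64, 64)
labels = torch.randint(0, 19, (2, 64, 64))
loss = criterion(logits, labels)
```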