mirror of
https://github.com/open-mmlab/mmsegmentation.git
synced 2025-06-03 22:03:48 +08:00
Fixed voc aug convert (#19)
* Fixed voc aug convert
* update getting_started.md
* add class balanced doc
This commit is contained in:
parent
1c3f547659
commit
1af2ad6a9f
5  .github/ISSUE_TEMPLATE/config.yml  vendored
@@ -1 +1,6 @@
blank_issues_enabled: false
contact_links:
  - name: MMSegmentation Documentation
    url: https://mmsegmentation.readthedocs.io
    about: Check the docs and FAQ to see if your question is already answered.
@@ -363,3 +363,13 @@ data = dict(
    test=dict(pipeline=test_pipeline))
```

We first define the new `train_pipeline`/`test_pipeline` and pass them into `data`.
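For reference, such a pair of pipelines might look like the following. This is a minimal sketch in the mmseg config style; the specific transforms, scales, and crop size are illustrative assumptions, not the values from any particular config.

```python
# Hypothetical pipelines; the transform names follow the mmseg convention,
# but the exact steps and parameters here are illustrative assumptions.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=(512, 1024)),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 1024),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    train=dict(pipeline=train_pipeline),
    val=dict(pipeline=test_pipeline),
    test=dict(pipeline=test_pipeline))
```

The pipelines you define should match the dataset config you are extending.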

Similarly, if we would like to switch from `SyncBN` to `BN` or `MMSyncBN`, we need to substitute every `norm_cfg` in the config.

```python
_base_ = '../pspnet/psp_r50_512x1024_40k_cityscapes.py'
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
    backbone=dict(norm_cfg=norm_cfg),
    decode_head=dict(norm_cfg=norm_cfg),
    auxiliary_head=dict(norm_cfg=norm_cfg))
```
@@ -132,6 +132,8 @@ Assume that you have already downloaded the checkpoints to the directory `checkpoints/`.
    checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
    4 --out results.pkl --eval mIoU cityscapes
```
Note: There is some inconsistency between the cityscapes mIoU and our mIoU. The reason is that the official cityscapes evaluation weights each class by its size by default, while we use a simple unweighted average for all datasets.
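The difference between the two averaging schemes can be sketched with a toy example (the per-class IoU values and class sizes below are made up for illustration):

```python
# Toy illustration of the two averaging schemes (values are made up).
per_class_iou = [0.9, 0.6, 0.3]   # IoU per class
class_sizes = [1000, 100, 10]     # pixels per class

# Simple (unweighted) mean over classes, as used here for all datasets.
simple_miou = sum(per_class_iou) / len(per_class_iou)   # 0.6

# Class-size-weighted mean, as the official cityscapes evaluation
# does by default; large classes dominate the result.
weighted_miou = (
    sum(iou * size for iou, size in zip(per_class_iou, class_sizes))
    / sum(class_sizes))                                  # ~0.8676
```

With a skewed class distribution the two numbers can differ substantially, which is the inconsistency the note above refers to.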
5. Test PSPNet on cityscapes test split with 4 GPUs, and generate the png files to be submitted to the official evaluation server.
@@ -151,8 +153,8 @@ Assume that you have already downloaded the checkpoints to the directory `checkpoints/`.
    4 --format-only --options "imgfile_prefix=./pspnet_test_results"
```

You will get png files under the `./pspnet_test_results` directory.
You may run `zip -r results.zip pspnet_test_results/` and submit the zip file to the [evaluation server](https://www.cityscapes-dataset.com/submit/).

### Image demo
@@ -26,3 +26,19 @@ model=dict(
        sampler=dict(type='OHEMPixelSampler', thresh=0.7, min_kept=100000)))
```

In this way, only pixels with a confidence score under 0.7 are used for training, and at least 100000 pixels are kept during training.
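The selection rule can be sketched as follows. This is a standalone illustration of the `thresh`/`min_kept` logic, not the actual `OHEMPixelSampler` implementation; the function name and the array-based interface are assumptions for the sketch.

```python
import numpy as np

def ohem_select(gt_conf, thresh=0.7, min_kept=100000):
    """Illustrative sketch of OHEM pixel selection.

    gt_conf: per-pixel confidence the model assigns to the ground-truth
    class. Pixels with confidence below `thresh` are "hard" and kept;
    if fewer than `min_kept` pixels qualify, the threshold is raised so
    that at least `min_kept` of the hardest pixels are kept.
    """
    flat = np.sort(gt_conf.ravel())          # hardest (lowest) first
    k = min(min_kept, flat.size)
    # raise the threshold if fewer than min_kept pixels fall under it
    eff_thresh = max(thresh, flat[k - 1])
    return gt_conf <= eff_thresh             # boolean mask of kept pixels
```

For example, with confidences `[0.1, 0.5, 0.8, 0.95]` and `thresh=0.7`, two pixels are kept; raising `min_kept` to 3 forces the third-hardest pixel (0.8) to be kept as well.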
## Class Balanced Loss

For a dataset with an unbalanced class distribution, you may change the loss weight of each class.
Here is an example for the cityscapes dataset.
```python
_base_ = './pspnet_r50-d8_512x1024_40k_cityscapes.py'
model = dict(
    decode_head=dict(
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0,
            # DeepLab used this class weight for cityscapes
            class_weight=[0.8373, 0.9180, 0.8660, 1.0345, 1.0166, 0.9969,
                          0.9754, 1.0489, 0.8786, 1.0023, 0.9539, 0.9843,
                          1.1116, 0.9037, 1.0865, 1.0955, 1.0865, 1.1529,
                          1.0507])))
```
`class_weight` will be passed into `CrossEntropyLoss` as the `weight` argument. Please refer to the [PyTorch Doc](https://pytorch.org/docs/stable/nn.html?highlight=crossentropy#torch.nn.CrossEntropyLoss) for details.
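A minimal standalone illustration of the `weight` argument in plain PyTorch, independent of mmseg (the two-class weights and logits below are made up):

```python
import torch
import torch.nn as nn

# Two classes; up-weight the rarer class 1. Values are illustrative.
class_weight = torch.tensor([0.5, 2.0])
criterion = nn.CrossEntropyLoss(weight=class_weight)

logits = torch.tensor([[2.0, 0.5], [0.2, 1.5]])  # (batch, classes)
target = torch.tensor([0, 1])

# Each sample's loss is scaled by the weight of its target class,
# and the batch mean is normalized by the sum of those weights.
loss = criterion(logits, target)
```

A misclassified pixel of an up-weighted (rare) class therefore contributes more to the gradient, which counteracts the class imbalance.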
|
@ -50,8 +50,12 @@ def main():
|
||||
list(mmcv.scandir(in_dir, suffix='.mat')),
|
||||
nproc=nproc)
|
||||
|
||||
with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:
|
||||
full_aug_list = [line.strip() for line in f]
|
||||
full_aug_list = []
|
||||
with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:
|
||||
full_aug_list += [line.strip() for line in f]
|
||||
with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:
|
||||
full_aug_list += [line.strip() for line in f]
|
||||
|
||||
with open(
|
||||
osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
|
||||
'train.txt')) as f:
|
||||