
Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation

Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen

[arXiv] [BibTeX] [Reference implementation]


Installation

Install Detectron2 following the instructions.
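For example, a typical source install looks like the following (an illustrative command; check the Detectron2 installation instructions for the build that matches your PyTorch and CUDA versions):

# Illustrative command from the Detectron2 install docs; adjust for your PyTorch/CUDA setup.
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'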

Training

To train a model with 8 GPUs run:

cd /path/to/detectron2/projects/Panoptic-DeepLab
python train_net.py --config-file configs/Cityscapes-PanopticSegmentation/panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024.yaml --num-gpus 8
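
If you have fewer GPUs, the global batch size can be overridden on the command line. The values below are illustrative placeholders, not tuned settings; the learning rate in the config should typically be scaled down in proportion to the batch size as well:

cd /path/to/detectron2/projects/Panoptic-DeepLab
# Illustrative single-GPU run: batch size reduced from 32 to 4. Scale SOLVER.BASE_LR in the
# config (or override it the same way) roughly in proportion to the batch size.
python train_net.py --config-file configs/Cityscapes-PanopticSegmentation/panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024.yaml --num-gpus 1 SOLVER.IMS_PER_BATCH 4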

Evaluation

Model evaluation can be done similarly:

cd /path/to/detectron2/projects/Panoptic-DeepLab
python train_net.py --config-file configs/Cityscapes-PanopticSegmentation/panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint
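
Evaluation logs and metrics are written to the config's OUTPUT_DIR (./output by default); it can be redirected in the same way. The paths below are placeholders:

cd /path/to/detectron2/projects/Panoptic-DeepLab
# Placeholder paths: MODEL.WEIGHTS points at a trained or downloaded checkpoint,
# and OUTPUT_DIR controls where the evaluation results are written.
python train_net.py --config-file configs/Cityscapes-PanopticSegmentation/panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint OUTPUT_DIR /path/to/eval_output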

Cityscapes Panoptic Segmentation

Cityscapes models are trained with ImageNet pretraining.

Method            Backbone  Output resolution  PQ    SQ    RQ    mIoU  AP    Memory (M)  model id  download
Panoptic-DeepLab  R50-DC5   1024×2048          58.6  80.9  71.2  75.9  29.8  8668        -         model | metrics
Panoptic-DeepLab  R52-DC5   1024×2048          60.3  81.5  72.9  78.2  33.2  9682                  model | metrics

Note:

  • R52: a ResNet-50 with its first 7x7 convolution replaced by three 3x3 convolutions. This modification has been used in most semantic segmentation papers. We pre-train this backbone on ImageNet using the default recipe of the PyTorch examples repository.
  • DC5 means using dilated convolution in res5.
  • We use a smaller training crop size (512x1024) than the original paper (1025x2049). We find that using a larger crop size (1024x2048) further improves PQ by 1.5%, but also degrades AP by 3%.

Citing Panoptic-DeepLab

If you use Panoptic-DeepLab, please use the following BibTeX entries.

  • CVPR 2020 paper:
@inproceedings{cheng2020panoptic,
  title={Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation},
  author={Cheng, Bowen and Collins, Maxwell D and Zhu, Yukun and Liu, Ting and Huang, Thomas S and Adam, Hartwig and Chen, Liang-Chieh},
  booktitle={CVPR},
  year={2020}
}
  • ICCV 2019 COCO-Mapillary workshop challenge report:
@inproceedings{cheng2019panoptic,
  title={Panoptic-DeepLab},
  author={Cheng, Bowen and Collins, Maxwell D and Zhu, Yukun and Liu, Ting and Huang, Thomas S and Adam, Hartwig and Chen, Liang-Chieh},
  booktitle={ICCV COCO + Mapillary Joint Recognition Challenge Workshop},
  year={2019}
}