BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation
Introduction
BiSeNetV1 (ECCV'2018)
@inproceedings{yu2018bisenet,
title={Bisenet: Bilateral segmentation network for real-time semantic segmentation},
author={Yu, Changqian and Wang, Jingbo and Peng, Chao and Gao, Changxin and Yu, Gang and Sang, Nong},
booktitle={Proceedings of the European conference on computer vision (ECCV)},
pages={325--341},
year={2018}
}
Results and models
Cityscapes
| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BiSeNetV1 (No Pretrain) | R-18-D32 | 1024x1024 | 160000 | 5.69 | 31.77 | 74.44 | 77.05 | config | model \| log |
| BiSeNetV1 | R-18-D32 | 1024x1024 | 160000 | 5.69 | 31.77 | 74.37 | 76.91 | config | model \| log |
| BiSeNetV1 (4x8) | R-18-D32 | 1024x1024 | 160000 | 11.17 | 31.77 | 75.16 | 77.24 | config | model \| log |
| BiSeNetV1 (No Pretrain) | R-50-D32 | 1024x1024 | 160000 | 15.39 | 7.71 | 76.92 | 78.87 | config | model \| log |
| BiSeNetV1 | R-50-D32 | 1024x1024 | 160000 | 15.39 | 7.71 | 77.68 | 79.57 | config | model \| log |
Note:

- `4x8`: Using 4 GPUs with 8 samples per GPU in training (see the config sketch below).
- Default setting is 4 GPUs with 4 samples per GPU in training.
- `No Pretrain` means the model is trained from scratch.
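The `4x8` entry only changes the data loader batch size relative to the default `4x4` setting. Below is a minimal sketch of how such a variant could be written, assuming the MMSegmentation 0.x config conventions (`_base_` inheritance and `samples_per_gpu` / `workers_per_gpu` keys); the base filename is taken from the configs listed in the table, and the exact keys should be checked against the installed version:

```python
# Hypothetical derived config: "4x8" = 4 GPUs x 8 samples per GPU.
# Only the per-GPU batch size is overridden here; the number of GPUs is
# still determined by the training launcher.
_base_ = './bisenetv1_r18-d32_in1k-pre_4x4_1024x1024_160k_cityscapes.py'

data = dict(
    samples_per_gpu=8,  # default BiSeNetV1 configs use 4 samples per GPU
    workers_per_gpu=8,
)
```

To try one of the released checkpoints, the mmseg 0.x Python API (`mmseg.apis.init_segmentor` / `inference_segmentor`) can be used roughly as follows; the checkpoint path is a placeholder for a file downloaded from the model links in the table above:

```python
# Minimal inference sketch, assuming the mmseg 0.x API.
from mmseg.apis import inference_segmentor, init_segmentor

config_file = 'configs/bisenetv1/bisenetv1_r18-d32_4x4_1024x1024_160k_cityscapes.py'
checkpoint_file = 'checkpoints/bisenetv1_r18-d32_cityscapes.pth'  # placeholder path

model = init_segmentor(config_file, checkpoint_file, device='cuda:0')
result = inference_segmentor(model, 'demo/demo.png')  # per-pixel class indices
model.show_result('demo/demo.png', result, out_file='result.png')
```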