# SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers

This repository contains PyTorch evaluation code, training code, and pretrained models for SegFormer.

SegFormer is a simple, efficient, and powerful semantic segmentation method, as shown in Figure 1.

We use MMSegmentation v0.13.0 as the codebase.

*Figure 1: Performance of SegFormer-B0 to SegFormer-B5.*
## Install

For installation and data preparation, please refer to the guidelines in MMSegmentation v0.13.0.

Other requirements:

```shell
pip install timm==0.3.2
```
## Evaluation

Download trained weights.

Example: evaluate SegFormer-B1 on ADE20K:

```shell
# single-gpu testing
python tools/test.py local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file

# multi-gpu testing
./tools/dist_test.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM>

# multi-gpu, multi-scale testing
./tools/dist_test.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM> --aug-test
```
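Besides the command-line test scripts, MMSegmentation v0.13.0 also exposes a Python inference API (`mmseg.apis`). A minimal single-image sketch, assuming a hypothetical checkpoint path under `checkpoints/`, might look like:

```python
# Single-image inference sketch using the MMSegmentation v0.13.0 Python API.
# The checkpoint path below is a placeholder; point it at your downloaded weights.
config_file = "local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py"
checkpoint_file = "checkpoints/segformer.b1.512x512.ade.160k.pth"  # hypothetical path

def run_demo(image_path):
    # Imported lazily so the sketch can be read without mmseg installed.
    from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot

    model = init_segmentor(config_file, checkpoint_file, device="cuda:0")
    result = inference_segmentor(model, image_path)  # per-pixel class indices
    show_result_pyplot(model, image_path, result)    # overlay the prediction on the image

# Usage (requires mmseg and a downloaded checkpoint):
#   run_demo("demo/demo.png")
```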
## Training

Download weights pretrained on ImageNet-1K, and put them in a folder `pretrained/`.

Example: train SegFormer-B1 on ADE20K:

```shell
# single-gpu training
python tools/train.py local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py

# multi-gpu training
./tools/dist_train.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py <GPU_NUM>
```
## License
Please check the LICENSE file. SegFormer may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact researchinquiries@nvidia.com.
## Citing SegFormer

```bibtex
@article{xie2021segformer,
  title={SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  author={Xie, Enze and Wang, Wenhai and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Luo, Ping},
  journal={arXiv preprint arXiv:2105.15203},
  year={2021}
}
```