[![NVIDIA Source Code License](https://img.shields.io/badge/license-NSCL-blue.svg)](https://github.com/NVlabs/SegFormer/blob/master/LICENSE)
![Python 3.8](https://img.shields.io/badge/python-3.8-green.svg)
# SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
<!-- ![image](resources/image.png) -->
<div align="center">
<img src="./resources/image.png" height="400">
</div>
<p align="center">
Figure 1: Performance of SegFormer-B0 to SegFormer-B5.
</p>

### [Project page](https://github.com/NVlabs/SegFormer) | [Paper](https://arxiv.org/abs/2105.15203) | [Demo (Youtube)](https://www.youtube.com/watch?v=J0MoRQzZe8U) | [Demo (Bilibili)](https://www.bilibili.com/video/BV1MV41147Ko/)
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.<br>
[Enze Xie](https://xieenze.github.io/), [Wenhai Wang](https://whai362.github.io/), [Zhiding Yu](https://chrisding.github.io/), [Anima Anandkumar](http://tensorlab.cms.caltech.edu/users/anima/), [Jose M. Alvarez](https://rsu.data61.csiro.au/people/jalvarez/), and [Ping Luo](http://luoping.me/).<br>
NeurIPS 2021.

This repository contains the official PyTorch implementation of the training and evaluation code and the pretrained models for [SegFormer](https://arxiv.org/abs/2105.15203).

SegFormer is a simple, efficient, and powerful semantic segmentation method, as shown in Figure 1.
We use [MMSegmentation v0.13.0](https://github.com/open-mmlab/mmsegmentation/tree/v0.13.0) as the codebase.

🔥🔥 SegFormer is now available in [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/tree/master/configs/segformer). 🔥🔥
## Installation
For installation and data preparation, please refer to the guidelines in [MMSegmentation v0.13.0](https://github.com/open-mmlab/mmsegmentation/tree/v0.13.0).

Other requirements:
```pip install timm==0.3.2```

An example environment that works: ```CUDA 10.1``` and ```PyTorch 1.7.1```:
```
pip install torchvision==0.8.2
pip install timm==0.3.2
pip install mmcv-full==1.2.7
pip install opencv-python==4.5.1.48
cd SegFormer && pip install -e . --user
```
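To quickly check that the environment is set up correctly, here is a minimal sanity-check sketch (not part of this repository; it only uses the packages installed above):

```python
# Verify the versions suggested above; adjust the expectations if you
# installed different ones.
import torch
import torchvision
import timm
import mmcv
import mmseg

print("torch:", torch.__version__)              # e.g. 1.7.1
print("torchvision:", torchvision.__version__)  # e.g. 0.8.2
print("timm:", timm.__version__)                # e.g. 0.3.2
print("mmcv:", mmcv.__version__)                # e.g. 1.2.7
print("mmseg:", mmseg.__version__)              # installed via `pip install -e .`
print("CUDA available:", torch.cuda.is_available())
```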
## Evaluation
Download the trained weights from [Google Drive](https://drive.google.com/drive/folders/1GAku0G0iR9DsBxCbfENWMJ27c5lYUeQA?usp=sharing) or [OneDrive](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/xieenze_connect_hku_hk/Ept_oetyUGFCsZTKiL_90kUBy5jmPV65O5rJInsnRCDWJQ?e=CvGohw).

Example: evaluate ```SegFormer-B1``` on ```ADE20K```:
```
# Single-gpu testing
python tools/test.py local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file

# Multi-gpu testing
./tools/dist_test.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM>

# Multi-gpu, multi-scale testing
./tools/dist_test.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM> --aug-test
```
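The `--aug-test` flag turns on multi-scale and flip test-time augmentation. As a rough illustration, under MMSegmentation v0.13.0 conventions this corresponds to a test pipeline like the sketch below; the specific `img_scale` and `img_ratios` values are illustrative assumptions for an ADE20K 512x512 setup, not values taken from this repository's configs.

```python
# Illustrative MMSegmentation-style test pipeline for multi-scale + flip
# testing (the concrete scale/ratio values below are assumptions).
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=True,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ]),
]
```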
## Training
Download the weights pretrained on ImageNet-1K from [Google Drive](https://drive.google.com/drive/folders/1b7bwrInTW4VLEm27YawHOAMSMikga2Ia?usp=sharing) or [OneDrive](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/xieenze_connect_hku_hk/EvOn3l1WyM5JpnMQFSEO5b8B7vrHw9kDaJGII-3N9KNhrg?e=cpydzZ), and put them in a folder ```pretrained/```.

Example: train ```SegFormer-B1``` on ```ADE20K```:
```
# Single-gpu training
python tools/train.py local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py

# Multi-gpu training
./tools/dist_train.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py <GPU_NUM>
```
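To inspect or tweak a config before launching a run (for example, to point the backbone at the downloaded ImageNet-1K weights or to change the output directory), you can use `mmcv`'s config API. A minimal sketch follows; the weight filename and work directory are hypothetical placeholders, not files shipped with this repository.

```python
# Minimal sketch using mmcv's Config API (mmcv 1.2.7 / MMSegmentation v0.13.0).
from mmcv import Config

cfg = Config.fromfile(
    'local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py')

# 'pretrained/mit_b1.pth' is an assumed filename for the downloaded
# ImageNet-1K backbone weights; use the actual filename you downloaded.
cfg.model.pretrained = 'pretrained/mit_b1.pth'
cfg.work_dir = './work_dirs/segformer.b1.512x512.ade.160k'  # hypothetical output dir

print(cfg.pretty_text)          # inspect the fully resolved config
cfg.dump('my_segformer_b1.py')  # then: python tools/train.py my_segformer_b1.py
```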
## Visualize
Here is a demo script to test a single image. For more details, please refer to [MMSegmentation's documentation](https://mmsegmentation.readthedocs.io/en/latest/get_started.html).
```shell
python demo/image_demo.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} [--device ${DEVICE_NAME}] [--palette ${PALETTE}]
```
Example: visualize ```SegFormer-B1``` on ```Cityscapes```:
```shell
python demo/image_demo.py demo/demo.png local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py \
    /path/to/checkpoint_file --device cuda:0 --palette cityscapes
```
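The same visualization can also be done from Python via the MMSegmentation inference API. A minimal sketch, mirroring the command above (the checkpoint path is a placeholder):

```python
# Minimal Python equivalent of demo/image_demo.py (MMSegmentation v0.13.0 API).
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot
from mmseg.core.evaluation import get_palette

config = 'local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py'
checkpoint = '/path/to/checkpoint_file'  # placeholder, as in the command above

model = init_segmentor(config, checkpoint, device='cuda:0')
result = inference_segmentor(model, 'demo/demo.png')  # list with one per-pixel label map
show_result_pyplot(model, 'demo/demo.png', result, get_palette('cityscapes'))
```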
## License
Please check the LICENSE file. SegFormer may be used non-commercially, meaning for research or
evaluation purposes only. For business inquiries, please contact
[researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com).
## Citation
```
@article{xie2021segformer,
  title={SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  author={Xie, Enze and Wang, Wenhai and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Luo, Ping},
  journal={arXiv preprint arXiv:2105.15203},
  year={2021}
}
```