update readme

pull/1/head
Zhiding Yu 2021-06-12 18:16:33 -07:00
parent f6af29ff18
commit 82ea508d6b
2 changed files with 20 additions and 13 deletions

@@ -1,11 +1,8 @@
[![NVIDIA Source Code License](https://img.shields.io/badge/license-NSCL-blue.svg)](https://github.com/NVlabs/SegFormer/blob/master/LICENSE)
![Python 3.7](https://img.shields.io/badge/python-3.7-green.svg)
# SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
This repository contains PyTorch evaluation code, training code and pretrained models for [SegFormer](https://arxiv.org/abs/2105.15203).
SegFormer is a simple, efficient and powerful semantic segmentation method, as shown in Figure 1.
We use [MMSegmentation v0.13.0](https://github.com/open-mmlab/mmsegmentation/tree/v0.13.0) as the codebase.
<!-- ![image](resources/image.png) -->
<div align="center">
<img src="./resources/image.png" height="400">
@@ -14,9 +11,19 @@ We use [MMSegmentation v0.13.0](https://github.com/open-mmlab/mmsegmentation/tre
Figure 1: Performance of SegFormer-B0 to SegFormer-B5.
</p>
### [Project page](https://github.com/NVlabs/SegFormer) | [Paper](https://arxiv.org/abs/2105.15203) | [Demo (Youtube)](https://www.youtube.com/watch?v=J0MoRQzZe8U) | [Demo (Bilibili)](https://www.bilibili.com/video/BV1MV41147Ko/)
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.<br>
[Enze Xie](https://xieenze.github.io/), [Wenhai Wang](https://whai362.github.io/), [Zhiding Yu](https://chrisding.github.io/), [Anima Anandkumar](https://tensorlab.cms.caltech.edu/users/anima/), [Jose M. Alvarez](https://rsu.data61.csiro.au/people/jalvarez/), and [Ping Luo](http://luoping.me/).<br>
Technical Report 2021.
## Install
This repository contains the PyTorch training/evaluation code and the pretrained models for [SegFormer](https://arxiv.org/abs/2105.15203).
SegFormer is a simple, efficient and powerful semantic segmentation method, as shown in Figure 1.
We use [MMSegmentation v0.13.0](https://github.com/open-mmlab/mmsegmentation/tree/v0.13.0) as the codebase.
## Installation
For installation and data preparation, please refer to the guidelines in [MMSegmentation v0.13.0](https://github.com/open-mmlab/mmsegmentation/tree/v0.13.0).
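As a concrete starting point, the commands below sketch one possible environment setup; the package versions and environment name are assumptions, and the MMSegmentation v0.13.0 documentation remains the authoritative reference.
```
# A minimal setup sketch (versions are illustrative, not pinned by this repo)
conda create -n segformer python=3.7 -y
conda activate segformer
pip install torch torchvision        # choose a CUDA build matching your driver
pip install mmcv-full                # MMCV is required by MMSegmentation
git clone https://github.com/NVlabs/SegFormer && cd SegFormer
pip install -e .                     # install this codebase in editable mode
```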
@@ -30,13 +37,13 @@ Download [trained weights](https://drive.google.com/drive/folders/1GAku0G0iR9DsB
Example: evaluate ```SegFormer-B1``` on ```ADE20K```:
```
# single-gpu testing
# Single-gpu testing
python tools/test.py local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file
# multi-gpu testing
# Multi-gpu testing
./tools/dist_test.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM>
# multi-gpu, multi-scale testing
# Multi-gpu, multi-scale testing
tools/dist_test.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py /path/to/checkpoint_file <GPU_NUM> --aug-test
```
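For instance, assuming the ```SegFormer-B1``` ADE20K checkpoint has been downloaded to a local ```pretrained/``` directory (the checkpoint path and filename below are hypothetical), a 4-GPU multi-scale evaluation would look like this:
```
# Hypothetical checkpoint path; point it at wherever you saved the weights
./tools/dist_test.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py \
    pretrained/segformer.b1.512x512.ade.160k.pth 4 --aug-test
```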
@@ -47,10 +54,10 @@ Download [weights](https://drive.google.com/drive/folders/1b7bwrInTW4VLEm27YawHO
Example: train ```SegFormer-B1``` on ```ADE20K```:
```
# single-gpu training
# Single-gpu training
python tools/train.py local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py
# multi-gpu training
# Multi-gpu training
./tools/dist_train.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py <GPU_NUM>
```
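As a usage sketch, an 8-GPU ```SegFormer-B1``` run on ADE20K could be launched as below; the GPU count and work directory are illustrative, and ```--work-dir``` is the standard MMSegmentation option for choosing where logs and checkpoints are written (assumed here, not stated in this README).
```
# Illustrative launch: 8 GPUs, custom work directory (both are assumptions)
./tools/dist_train.sh local_configs/segformer/B1/segformer.b1.512x512.ade.160k.py 8 \
    --work-dir ./work_dirs/segformer.b1.512x512.ade.160k
```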
@@ -60,7 +67,7 @@ evaluation purposes only. For business inquiries, please contact
[researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com).
## Citing SegFormer
## Citation
```
@article{xie2021segformer,
title={SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},

Binary image file changed, not shown (91 KiB before, 65 KiB after).