# Benchmark and Model Zoo
## Common settings
* We use distributed training with 4 GPUs by default.
* All PyTorch-style pretrained backbones on ImageNet are trained by ourselves, following the same procedure as in the [paper](https://arxiv.org/pdf/1812.01187.pdf).
  Our ResNet-style backbones are based on the ResNetV1c variant, where the 7x7 conv in the input stem is replaced with three 3x3 convs.
* For consistency across different hardware, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` over all 4 GPUs, with `torch.backends.cudnn.benchmark=False`.
  Note that this value is usually smaller than what `nvidia-smi` shows.
* We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time.
  Results are obtained with the script `tools/benchmark.py`, which computes the average time over 200 images with `torch.backends.cudnn.benchmark=False`.
* There are two inference modes in this framework; by default, we use `slide` inference for models trained with 769x769 inputs and `whole` inference for the rest (see the example configuration after this list).
    * `slide` mode: The `test_cfg` will be like `dict(mode='slide', crop_size=(769, 769), stride=(513, 513))`.
      In this mode, multiple patches are cropped from the input image and passed through the network individually.
      The crop size and the stride between patches are specified by `crop_size` and `stride`, and overlapping areas are merged by averaging.
    * `whole` mode: The `test_cfg` will be like `dict(mode='whole')`.
      In this mode, the whole image is passed through the network directly.
* For input sizes of the form 8x+1 (e.g. 769), `align_corners=True` is adopted, following common practice.
  Otherwise, for input sizes of the form 8x (e.g. 512, 1024), `align_corners=False` is adopted.
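As a concrete illustration of these settings, the sketch below shows what the two `test_cfg` variants and the `align_corners` choice might look like in a config file. It is only a hypothetical excerpt, not a complete MMSegmentation config; the `decode_head` entry is a placeholder.

```python
# Hypothetical config excerpt illustrating the inference settings above.
# Only `test_cfg` and `align_corners` come from the conventions described
# in this section; the rest is a placeholder, not a full model definition.

# `slide` inference (used for models trained with 769x769 inputs):
# 769x769 patches are cropped with a stride of 513, passed through the
# network one by one, and overlapping predictions are averaged.
test_cfg = dict(mode='slide', crop_size=(769, 769), stride=(513, 513))

# `whole` inference (used for the other input sizes, e.g. 512x1024):
# the entire image goes through the network in a single forward pass.
# test_cfg = dict(mode='whole')

# The align_corners convention: True for 8x+1 inputs (e.g. 769),
# False for 8x inputs (e.g. 512, 1024).
decode_head = dict(
    type='PSPHead',      # placeholder decode head
    align_corners=True,  # 769x769 input -> align_corners=True
)
```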
## Baselines
### FCN
Please refer to [FCN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn) for details.
### PSPNet
Please refer to [PSPNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/pspnet) for details.
### DeepLabV3
Please refer to [DeepLabV3](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3) for details.
### PSANet
Please refer to [PSANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet) for details.
### DeepLabV3+
Please refer to [DeepLabV3+](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/deeplabv3plus) for details.
### UPerNet
Please refer to [UPerNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/upernet) for details.
### NonLocal Net
Please refer to [NonLocal Net](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/nlnet) for details.
### EncNet
Please refer to [EncNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/encnet) for details.
### CCNet
Please refer to [CCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ccnet) for details.
### DANet
Please refer to [DANet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/danet) for details.
### HRNet
Please refer to [HRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/hrnet) for details.
### GCNet
Please refer to [GCNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/gcnet) for details.
### ANN
Please refer to [ANN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ann) for details.
### OCRNet
Please refer to [OCRNet](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/ocrnet) for details.
### Fast-SCNN
Please refer to [Fast-SCNN](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fastscnn) for details.
### ResNeSt
Please refer to [ResNeSt](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/resnest) for details.
### Mixed Precision (FP16) Training
Please refer to [Mixed Precision (FP16) Training](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fp16/README.md) for details.
## Speed benchmark
### Hardware
- 8 NVIDIA Tesla V100 (32G) GPUs
- Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
### Software environment
- Python 3.7
- PyTorch 1.5
- CUDA 10.1
- CUDNN 7.6.03
- NCCL 2.4.08
### Training speed
For a fair comparison, we benchmark all implementations with ResNet-101V1c.
The input size is fixed to 1024x512 with a batch size of 2.
The training speed is reported below in seconds per iteration (s/iter); lower is better.

| Implementation | PSPNet (s/iter) | DeepLabV3+ (s/iter) |
|----------------|-----------------|---------------------|
| [MMSegmentation](https://github.com/open-mmlab/mmsegmentation) | **0.83** | **0.85** |
| [SegmenTron](https://github.com/LikeLy-Journey/SegmenTron) | 0.84 | 0.85 |
| [CSAILVision](https://github.com/CSAILVision/semantic-segmentation-pytorch) | 1.15 | N/A |
| [vedaseg](https://github.com/Media-Smart/vedaseg) | 0.95 | 1.25 |

Note: The output stride of DeepLabV3+ is 8.
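For reference, the sketch below shows one way such s/iter numbers (and the `torch.cuda.max_memory_allocated()` figure mentioned under common settings) could be measured on a single GPU. It is an illustrative assumption only, not the benchmarking code used for the table above; the model, optimizer, and data loader are stand-ins.

```python
import time

import torch


def benchmark_train_speed(model, optimizer, data_loader, num_iters=100, warmup=10):
    """Rough single-GPU s/iter and peak-memory measurement; a sketch only."""
    torch.backends.cudnn.benchmark = False  # match the reporting convention above
    model.cuda().train()
    data_iter = iter(data_loader)

    def one_step():
        images, _ = next(data_iter)
        # Placeholder objective: a real benchmark would compute the actual
        # segmentation loss against the ground-truth labels.
        loss = model(images.cuda()).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    for _ in range(warmup):  # warm-up so lazy initialization does not skew timing
        one_step()

    torch.cuda.synchronize()
    start = time.time()
    for _ in range(num_iters):
        one_step()
    torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock

    sec_per_iter = (time.time() - start) / num_iters
    max_mem_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
    return sec_per_iter, max_mem_gb
```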