# Quantization test result

Currently, mmdeploy supports ncnn quantization.

## Quantize with ncnn
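
In mmdeploy, the ncnn int8 path is selected by a deploy config that requests the ncnn backend with int8 precision and is typically run through `tools/deploy.py` together with a directory of calibration images. The snippet below is a minimal sketch of such a config for an mmcls model; the `_base_` file names and backend fields are assumptions modeled on mmdeploy's config layout and may differ in your version.

```python
# A minimal sketch of an ncnn int8 deploy config for an mmcls model.
# File names and field values are assumptions; check the *ncnn-int8* configs
# shipped with your mmdeploy version for the exact layout.
_base_ = ['./classification_static.py', '../_base_/backends/ncnn-int8.py']

# Static input shape used when exporting the model to ONNX before conversion.
onnx_config = dict(input_shape=[224, 224])

# The ncnn-int8 base config essentially amounts to a backend section like:
# backend_config = dict(type='ncnn', precision='INT8', use_vulkan=False)
```

During conversion, calibration images are fed through the fp32 model to estimate activation ranges and build the per-layer quantization table that ncnn consumes; the benchmarks below compare the resulting int8 models against their fp32 counterparts.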

### mmcls

| model            | dataset     | fp32 top-1 (%) | int8 top-1 (%) |
| :--------------- | :---------- | :------------- | :------------- |
| ResNet-18        | Cifar10     | 94.82          | 94.83          |
| ResNeXt-32x4d-50 | ImageNet-1k | 77.90          | 78.20\*        |
| MobileNet V2     | ImageNet-1k | 71.86          | 71.43\*        |
| HRNet-W18\*      | ImageNet-1k | 76.75          | 76.25\*        |

Note:

- Because ImageNet-1k is large and ncnn has not yet released a Vulkan int8 version, only part of the test set (4000/50000 images) is used.
- Accuracy changes after quantization; a shift of less than 1% in either direction is normal for classification models.

### OCR detection

| model     | dataset   | fp32 hmean | int8 hmean     |
| :-------- | :-------- | :--------- | :------------- |
| PANet     | ICDAR2015 | 0.795      | 0.792 @thr=0.9 |
| TextSnake | CTW1500   | 0.817      | 0.818          |

Note: mmocr uses `shapely` to compute IoU, which results in a slight difference in accuracy.

### Pose detection

| model                     | dataset        | fp32 AP | int8 AP |
| :------------------------ | :------------- | :------ | :------ |
| Hourglass                 | COCO2017       | 0.717   | 0.713   |
| S-ViPNAS-MobileNetV3      | COCO2017       | 0.687   | 0.683   |
| S-ViPNAS-Res50            | COCO2017       | 0.701   | 0.696   |
| S-ViPNAS-MobileNetV3      | COCO Wholebody | 0.459   | 0.445   |
| S-ViPNAS-Res50            | COCO Wholebody | 0.484   | 0.476   |
| S-ViPNAS-MobileNetV3_dark | COCO Wholebody | 0.499   | 0.481   |
| S-ViPNAS-Res50_dark       | COCO Wholebody | 0.520   | 0.511   |

Note: MMPose models are tested with `flip_test` explicitly set to `False` in the model configs (see the sketch below).
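
For reference, the flip-test switch lives in the model's test-time settings. The fragment below is a rough sketch of the relevant part of an MMPose top-down heatmap config; the surrounding key names are typical defaults and may differ per model and mmpose version.

```python
# Hedged sketch of the test-time settings in an MMPose top-down heatmap config;
# key names besides flip_test are assumed defaults and may vary per model.
test_cfg = dict(
    flip_test=False,        # disabled so fp32 and int8 are compared without flip augmentation
    post_process='default',
    shift_heatmap=True,
    modulate_kernel=11,
)
```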

### Super Resolution

| model  | dataset | fp32 PSNR/SSIM | int8 PSNR/SSIM |
| :----- | :------ | :------------- | :------------- |
| EDSRx2 | Set5    | 35.7733/0.9365 | 35.4266/0.9334 |
| EDSRx4 | Set5    | 30.2194/0.8498 | 29.9340/0.8409 |