mmdeploy/docs/en/03-benchmark/quantization.md
OldDreamInWind 161fb01d73
[Feature]add edsr result && super-resolution ncnn-int8 config (#1111)
2022-10-18 10:04:56 +08:00

# Quantization test result

Currently, mmdeploy supports ncnn quantization.

## Quantize with ncnn
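In mmdeploy, the backend is selected through a deployment config file. Below is a minimal sketch of what an ncnn int8 backend section might look like; the exact key names (`precision`, `use_vulkan`) follow mmdeploy's config style but should be treated as assumptions, not verified source:

```python
# Sketch of a deployment config fragment selecting the ncnn int8 backend.
# Key names (`precision`, `use_vulkan`) are assumptions, not copied from
# the mmdeploy repository.
backend_config = dict(
    type='ncnn',
    precision='INT8',   # quantize weights/activations to int8
    use_vulkan=False,   # ncnn has not released a Vulkan int8 version
)
```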

### mmcls

| model            | dataset     | fp32 top-1 (%) | int8 top-1 (%) |
| :--------------- | :---------- | :------------- | :------------- |
| ResNet-18        | Cifar10     | 94.82          | 94.83          |
| ResNeXt-32x4d-50 | ImageNet-1k | 77.90          | 78.20\*        |
| MobileNet V2     | ImageNet-1k | 71.86          | 71.43\*        |
| HRNet-W18\*      | ImageNet-1k | 76.75          | 76.25\*        |

Note:

- Because the ImageNet-1k test set is large and ncnn has not released a Vulkan int8 version, only part of the test set (4000/50000 images) is used.
- Accuracy will vary after quantization; it is normal for a classification model's accuracy to change by less than 1%.
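The top-1 metric in the table above counts a sample as correct when the model's highest-scoring class equals the ground-truth label. A minimal sketch of how such a number is computed (the helper name is illustrative, not mmcls's API):

```python
def top1_accuracy(preds, labels):
    """Fraction of samples whose predicted class id matches the label."""
    assert len(preds) == len(labels) and labels
    correct = sum(p == t for p, t in zip(preds, labels))
    return correct / len(labels)

# e.g. 3 of 4 predictions correct -> 0.75
print(top1_accuracy([3, 1, 2, 0], [3, 1, 2, 7]))
```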

### OCR detection

| model     | dataset   | fp32 hmean | int8 hmean     |
| :-------- | :-------- | :--------- | :------------- |
| PANet     | ICDAR2015 | 0.795      | 0.792 @thr=0.9 |
| TextSnake | CTW1500   | 0.817      | 0.818          |

Note: mmocr uses `shapely` to compute IoU, which results in a slight difference in accuracy.
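The hmean column is the harmonic mean (F-score) of detection precision and recall; the `@thr=0.9` annotation marks the threshold at which the int8 PANet result was taken. A minimal sketch of the metric itself (the function name is mine, not mmocr's):

```python
def hmean(precision, recall):
    """Harmonic mean (F-score) of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(hmean(0.8, 0.792))
```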

### Pose detection

| model     | dataset  | fp32 AP | int8 AP |
| :-------- | :------- | :------ | :------ |
| Hourglass | COCO2017 | 0.717   | 0.713   |

Note: MMPose models are tested with `flip_test` explicitly set to `False` in the model configs.

### Super Resolution

| model  | dataset | fp32 PSNR/SSIM | int8 PSNR/SSIM |
| :----- | :------ | :------------- | :------------- |
| EDSRx2 | Set5    | 35.7733/0.9365 | 35.4266/0.9334 |
| EDSRx4 | Set5    | 30.2194/0.8498 | 29.9340/0.8409 |
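PSNR in the table above is derived from the mean squared error between the restored image and the ground truth (higher is better); SSIM is a separate structural similarity index. A minimal PSNR sketch for 8-bit images, illustrative only and not the evaluation code used for these numbers:

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio in dB for pixel values in [0, max_val]."""
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr(100.0))  # higher PSNR means closer to the ground truth
```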