
# Quantization test result

Currently, MMDeploy supports ncnn quantization.

## Quantize with ncnn

### mmpretrain

| model            | dataset     | fp32 top-1 (%) | int8 top-1 (%) |
| :--------------- | :---------- | :------------- | :------------- |
| ResNet-18        | CIFAR-10    | 94.82          | 94.83          |
| ResNeXt-32x4d-50 | ImageNet-1k | 77.90          | 78.20\*        |
| MobileNet V2     | ImageNet-1k | 71.86          | 71.43\*        |
| HRNet-W18\*      | ImageNet-1k | 76.75          | 76.25\*        |

Note:

- Because ImageNet-1k is large and ncnn has not yet released a Vulkan int8 version, only part of the test set (4000/50000 images) is used.
- Accuracy varies after quantization; a shift of less than 1% is normal for classification models.
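The small accuracy shifts in the table come from mapping fp32 weights and activations onto an 8-bit grid. The following is a minimal sketch of symmetric per-tensor int8 quantization, not ncnn's actual implementation (ncnn calibrates scales from activation statistics); all values here are illustrative:

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric per-tensor int8 quantization: round each value to the
    nearest representable step and clip to the int8 range."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def dequantize(q, scale):
    """Map int8 values back to float32."""
    return q.astype(np.float32) * scale

# Toy weights: the quantize/dequantize round trip introduces a bounded
# rounding error, which is why post-quantization accuracy drifts slightly.
w = np.array([0.503, -1.218, 0.004, 0.997], dtype=np.float32)
scale = np.abs(w).max() / 127.0
w_hat = dequantize(quantize_int8(w, scale), scale)
err = np.abs(w - w_hat).max()  # at most half a quantization step
```

The reconstruction error is bounded by half the quantization step (`scale / 2`), so models whose accuracy is robust to small weight perturbations lose little, and test-set noise can even make the int8 number come out marginally higher, as with ResNet-18 above.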

### OCR detection

| model     | dataset   | fp32 hmean | int8 hmean     |
| :-------- | :-------- | :--------- | :------------- |
| PANet     | ICDAR2015 | 0.795      | 0.792 @thr=0.9 |
| TextSnake | CTW1500   | 0.817      | 0.818          |

Note: MMOCR uses `shapely` to compute IoU, which results in a slight difference in accuracy.
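The hmean columns above are the harmonic mean of detection precision and recall (the F-score used in ICDAR-style text detection evaluation). A quick sketch, with illustrative precision/recall values that are not from these benchmark runs:

```python
def hmean(precision, recall):
    """Harmonic mean of precision and recall, the metric reported
    in the OCR detection table."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical numbers for illustration only:
p, r = 0.82, 0.77
score = hmean(p, r)  # about 0.794
```

Because hmean depends on per-image IoU matching, small IoU differences from the `shapely` polygon intersection can nudge the final score, as the note above mentions.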

### Pose detection

| model     | dataset  | fp32 AP | int8 AP |
| :-------- | :------- | :------ | :------ |
| Hourglass | COCO2017 | 0.717   | 0.713   |

Note: MMPose models are tested with `flip_test` explicitly set to `False` in the model configs.