# Quantization test result

Currently, mmdeploy supports ncnn quantization.

## Quantize with ncnn

### mmcls

|      model       |   dataset   | fp32 top-1 (%) | int8 top-1 (%) |
| :--------------: | :---------: | :------------: | :------------: |
|    ResNet-18     |   Cifar10   |     94.82      |     94.83      |
| ResNeXt-32x4d-50 | ImageNet-1k |     77.90      |     78.20*     |
|   MobileNet V2   | ImageNet-1k |     71.86      |     71.43*     |
|    HRNet-W18*    | ImageNet-1k |     76.75      |     76.25*     |

Note:

- Because the ImageNet-1k validation set is large and ncnn has not yet released a Vulkan int8 version, only part of the test set (4000/50000 images) is used (a rough evaluation sketch follows this list).
- Accuracy will vary after quantization; it is normal for a classification model to change by less than 1%.
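
As a rough, hypothetical illustration of the evaluation described above, the sketch below computes top-1 accuracy over a 4000-image subset. The random arrays merely stand in for real fp32/int8 classifier outputs and ground-truth labels; they are not part of this benchmark.

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Top-1 accuracy (%) given per-image class scores and ground-truth labels."""
    pred = logits.argmax(axis=1)
    return 100.0 * (pred == labels).mean()

# Hypothetical data: 4000 images out of the 50000-image ImageNet-1k val set.
rng = np.random.default_rng(0)
labels = rng.integers(0, 1000, size=4000)
fp32_logits = rng.standard_normal((4000, 1000))
# Small perturbation stands in for quantization noise on the int8 outputs.
int8_logits = fp32_logits + 0.01 * rng.standard_normal((4000, 1000))

print("fp32 top-1 (%):", top1_accuracy(fp32_logits, labels))
print("int8 top-1 (%):", top1_accuracy(int8_logits, labels))
```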

### OCR detection

| model |  dataset  | fp32 hmean |   int8 hmean   |
| :---: | :-------: | :--------: | :------------: |
| PANet | ICDAR2015 |   0.795    | 0.792 @thr=0.9 |

Note: mmocr uses `shapely` to compute IoU, which results in a slight difference in accuracy.
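
For reference, a minimal example of the shapely-based polygon IoU that the note refers to; the two quadrilaterals below are made up purely for illustration.

```python
from shapely.geometry import Polygon

def polygon_iou(poly_a, poly_b):
    """IoU of two polygons, computed via shapely geometry operations."""
    a, b = Polygon(poly_a), Polygon(poly_b)
    inter = a.intersection(b).area
    union = a.union(b).area
    return inter / union if union > 0 else 0.0

# Two made-up quadrilaterals (e.g. a predicted and a ground-truth text box).
pred = [(0, 0), (10, 0), (10, 4), (0, 4)]
gt   = [(1, 0), (11, 0), (11, 4), (1, 4)]
print(polygon_iou(pred, gt))  # ~0.818; small floating-point geometry differences
                              # are the source of the accuracy gap mentioned above
```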