
# Quantization test result

Currently, mmdeploy supports ncnn quantization.

## Quantize with ncnn

### mmcls

| model            | dataset     | fp32 top-1 (%) | int8 top-1 (%) |
| :--------------- | :---------- | :------------- | :------------- |
| ResNet-18        | Cifar10     | 94.82          | 94.83          |
| ResNeXt-32x4d-50 | ImageNet-1k | 77.90          | 78.20\*        |
| MobileNet V2     | ImageNet-1k | 71.86          | 71.43\*        |
| HRNet-W18\*      | ImageNet-1k | 76.75          | 76.25\*        |

Note:

- Because ImageNet-1k is large and ncnn has not yet released a Vulkan int8 version, only part of the test set (4000/50000 images) is used.
- Accuracy may vary after quantization; it is normal for a classification model to fluctuate by less than 1%.
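The "within 1%" claim can be checked directly against the mmcls table above. The sketch below is illustrative only (the dictionary and helper are not part of mmdeploy); it computes the int8 − fp32 top-1 delta for each model:

```python
# Top-1 results copied from the mmcls table above: model -> (fp32, int8).
results = {
    "ResNet-18": (94.82, 94.83),
    "ResNeXt-32x4d-50": (77.90, 78.20),
    "MobileNet V2": (71.86, 71.43),
    "HRNet-W18": (76.75, 76.25),
}

def within_tolerance(fp32: float, int8: float, tol: float = 1.0) -> bool:
    """Return True if the int8 top-1 stays within `tol` points of fp32."""
    return abs(int8 - fp32) <= tol

# Signed accuracy change introduced by quantization, per model.
deltas = {name: round(i8 - f32, 2) for name, (f32, i8) in results.items()}
```

Every delta here is within 1 point, matching the note; some models even gain slightly on the evaluated subset.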

### OCR detection

| model     | dataset   | fp32 hmean | int8 hmean     |
| :-------- | :-------- | :--------- | :------------- |
| PANet     | ICDAR2015 | 0.795      | 0.792 @thr=0.9 |
| TextSnake | CTW1500   | 0.817      | 0.818          |

Note: mmocr uses `shapely` to compute IoU, which results in a slight difference in accuracy.
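For reference, polygon IoU with `shapely` amounts to intersection area over union area. The helper below is a minimal sketch (not mmocr's actual implementation) showing the computation the note refers to:

```python
from shapely.geometry import Polygon

def polygon_iou(a, b):
    """IoU of two polygons, each given as a list of (x, y) vertices."""
    pa, pb = Polygon(a), Polygon(b)
    inter = pa.intersection(pb).area
    union = pa.union(pb).area
    return inter / union if union > 0 else 0.0
```

Because `shapely` computes exact geometric areas in floating point, its IoU can differ slightly from pixel-mask-based implementations, hence the small accuracy difference.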

### Pose detection

| model     | dataset  | fp32 AP | int8 AP |
| :-------- | :------- | :------ | :------ |
| Hourglass | COCO2017 | 0.717   | 0.713   |

Note: MMPose models are tested with `flip_test` explicitly set to `False` in the model configs.
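For orientation, the fragment below sketches where `flip_test` typically lives in an mmpose model config. The surrounding keys are illustrative assumptions, not the exact config used for this benchmark; check the config actually referenced by the test:

```python
# Illustrative mmpose config fragment (keys vary by model and version).
model = dict(
    # ... backbone / keypoint_head settings omitted ...
    test_cfg=dict(
        flip_test=False,  # disable horizontal-flip test-time augmentation
    ),
)
```

Disabling flip test-time augmentation keeps the fp32 and int8 runs directly comparable and halves inference cost, at the price of a slightly lower AP than the headline mmpose numbers.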