Quantization test result

Currently, mmdeploy supports ncnn quantization.
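
Int8 quantization replaces fp32 weights and activations with 8-bit integers plus a scale factor, and the small accuracy shifts in the tables below come from the rounding this introduces. The snippet below is only a conceptual sketch of symmetric per-tensor int8 quantization in numpy; it is not mmdeploy's or ncnn's actual procedure (ncnn derives its scales from a calibration table built from calibration images).

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: the scale maps max |x| to 127."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 values back to float for comparison with the original tensor."""
    return q.astype(np.float32) * scale

# Round-trip a random fp32 weight tensor and measure the quantization error.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
```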

Quantize with ncnn

mmpretrain

| model | dataset | fp32 top-1 (%) | int8 top-1 (%) |
| :--- | :--- | :---: | :---: |
| ResNet-18 | CIFAR-10 | 94.82 | 94.83 |
| ResNeXt-32x4d-50 | ImageNet-1k | 77.90 | 78.20* |
| MobileNet V2 | ImageNet-1k | 71.86 | 71.43* |
| HRNet-W18* | ImageNet-1k | 76.75 | 76.25* |

Note:

  • Because the ImageNet-1k dataset is large and ncnn has not released a Vulkan int8 version, only part of the test set (4000/50000 images) is used; a sketch of this subset evaluation follows these notes.

  • Accuracy changes after quantization; a shift of less than 1% is normal for classification models.
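
To make the subset evaluation above concrete, here is a minimal sketch of computing top-1 accuracy over the first 4000 samples of a test set. `predict_fn` and `samples` are placeholders for illustration, not mmdeploy or mmpretrain APIs.

```python
def top1_accuracy(predict_fn, samples, num_samples=4000):
    """Top-1 accuracy over the first `num_samples` items of `samples`.

    `predict_fn(image)` is assumed to return a predicted class index and
    `samples` to yield (image, label) pairs; both are placeholders here.
    """
    correct = total = 0
    for image, label in samples:
        if total >= num_samples:
            break
        correct += int(predict_fn(image) == label)
        total += 1
    return correct / max(total, 1)
```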

OCR detection

| model | dataset | fp32 hmean | int8 hmean |
| :--- | :--- | :---: | :---: |
| PANet | ICDAR2015 | 0.795 | 0.792 @thr=0.9 |
| TextSnake | CTW1500 | 0.817 | 0.818 |

Note: mmocr uses `shapely` to compute IoU, which results in a slight difference in accuracy.
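
For reference, the sketch below shows how polygon IoU can be computed with shapely; the exact polygon construction and thresholding used in mmocr's evaluation are not reproduced here.

```python
from shapely.geometry import Polygon

def polygon_iou(coords_a, coords_b) -> float:
    """IoU of two polygons given as lists of (x, y) vertices."""
    a, b = Polygon(coords_a), Polygon(coords_b)
    inter = a.intersection(b).area
    union = a.union(b).area
    return inter / union if union > 0 else 0.0

# Example: two overlapping axis-aligned boxes as polygons.
box_a = [(0, 0), (4, 0), (4, 4), (0, 4)]
box_b = [(2, 2), (6, 2), (6, 6), (2, 6)]
print(polygon_iou(box_a, box_b))  # 4 / 28 ≈ 0.143
```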

Pose detection

| model | dataset | fp32 AP | int8 AP |
| :--- | :--- | :---: | :---: |
| Hourglass | COCO2017 | 0.717 | 0.713 |

Note: MMPose models are tested with `flip_test` explicitly set to `False` in the model configs.
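
For illustration, the fragment below shows roughly where `flip_test` sits in an MMPose model config. The exact nesting of `test_cfg` differs between MMPose versions, so treat this as an assumption rather than a copy of the tested configs.

```python
# Fragment of an MMPose model config with test-time flipping disabled.
# The exact layout of test_cfg varies between MMPose versions; this is
# only an illustrative assumption, not one of the configs used above.
model = dict(
    test_cfg=dict(
        flip_test=False,  # do not average predictions over horizontal flips
    ),
)
```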
