Currently, MMDeploy supports ncnn quantization.
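As a rough sketch of how an int8 ncnn model could be produced, the snippet below launches MMDeploy's `tools/deploy.py` from Python. The deploy config name, the placeholder paths, and the `--quant`/`--quant-image-dir` flags are assumptions based on typical MMDeploy usage, not something stated in this document.

```python
# Hypothetical sketch: convert and quantize a classification model to an
# int8 ncnn backend via MMDeploy's tools/deploy.py.
# Config name, paths, and the --quant flags are assumptions.
import subprocess

subprocess.run(
    [
        "python", "tools/deploy.py",
        "configs/mmcls/classification_ncnn-int8_static.py",  # assumed deploy config
        "/path/to/model_config.py",        # model config (placeholder)
        "/path/to/model_checkpoint.pth",   # checkpoint (placeholder)
        "/path/to/test_image.png",         # image used to verify the conversion
        "--work-dir", "work_dir",
        "--device", "cpu",
        "--quant",                                           # enable int8 quantization
        "--quant-image-dir", "/path/to/calibration_images",  # calibration data
    ],
    check=True,
)
```
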
model | dataset | fp32 top-1 (%) | int8 top-1 (%) |
---|---|---|---|
ResNet-18 | CIFAR-10 | 94.82 | 94.83 |
ResNeXt-32x4d-50 | ImageNet-1k | 77.90 | 78.20* |
MobileNet V2 | ImageNet-1k | 71.86 | 71.43* |
HRNet-W18* | ImageNet-1k | 76.75 | 76.25* |
Note:
- Because the ImageNet-1k test set is large and ncnn has not yet released a Vulkan int8 version, only part of the test set (4000/50000 images) is used.
- Accuracy changes after quantization; it is normal for a classification model's accuracy to fluctuate (up or down) by less than 1%.
model | dataset | fp32 hmean | int8 hmean |
---|---|---|---|
PANet | ICDAR2015 | 0.795 | 0.792 @thr=0.9 |
TextSnake | CTW1500 | 0.817 | 0.818 |
Note: MMOCR uses `shapely` to compute IoU, which results in a slight difference in accuracy.
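For reference, the note above refers to geometric polygon IoU. A minimal sketch of computing it with `shapely` is shown below; the two boxes are made-up example polygons, not values from the benchmark.

```python
# Minimal sketch of polygon IoU with shapely, as used in text-detection
# evaluation; the coordinates below are made-up examples.
from shapely.geometry import Polygon

pred = Polygon([(0, 0), (10, 0), (10, 5), (0, 5)])  # predicted text region
gt = Polygon([(2, 0), (12, 0), (12, 5), (2, 5)])    # ground-truth region

inter = pred.intersection(gt).area
union = pred.union(gt).area
iou = inter / union if union > 0 else 0.0
print(f"IoU = {iou:.3f}")  # geometric IoU; may differ slightly from other IoU implementations
```
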
model | dataset | fp32 AP | int8 AP |
---|---|---|---|
Hourglass | COCO2017 | 0.717 | 0.713 |
Note: MMPose models are tested with `flip_test` explicitly set to `False` in the model configs.
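A minimal sketch of what disabling flip test could look like, assuming an MMPose 0.x-style config where `test_cfg` lives inside the `model` dict; the base config name is a placeholder:

```python
# Hypothetical MMPose config override: disable test-time horizontal flipping.
# The test_cfg layout is an assumption based on typical MMPose 0.x configs.
_base_ = ['./hourglass52_coco_256x256.py']  # placeholder base config

model = dict(
    test_cfg=dict(
        flip_test=False,  # evaluate without flip test, matching the table above
    ))
```
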