I am trying to compare the post-training quantization performance of a normally trained and a QAT-trained network (ResNet-18 and ResNet-50) on ILSVRC-2012, using the torchvision classification reference code: https://github.com/pytorch/vision/tree/main/references/classification. However, I cannot find any reference numbers for the expected FP32 and INT8 accuracies (standard and QAT), and I am also getting the following surprising results:
- `train_quantization.py` trains poorly: I'm getting below 40% top-1 accuracy even after 45 epochs, for both ResNet-18 and ResNet-50.
- `train.py` standard-trained models do not degrade in accuracy post-quantization: I trained an FP32 ResNet-18 to `Acc@1 67.610 Acc@5 88.102`, and the INT8 post-quantization accuracy is `Acc@1 67.136 Acc@5 87.768`.
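For context, here is a minimal sketch of the eager-mode post-training static quantization workflow I mean. A toy conv net stands in for ResNet-18 so it runs standalone; the `qconfig`/`prepare`/`convert` calls are the standard `torch.ao.quantization` ones:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig, prepare, convert,
)

class TinyNet(nn.Module):
    """Toy stand-in for ResNet-18; the same PTQ workflow applies."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # fp32 -> int8 at the network input
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # int8 -> fp32 at the network output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().eval()
model.qconfig = get_default_qconfig("fbgemm")  # x86 server backend
prepared = prepare(model)                      # insert observers

# Calibrate on a few batches (random data here; ImageNet val in practice).
with torch.no_grad():
    for _ in range(4):
        prepared(torch.randn(2, 3, 32, 32))

quantized = convert(prepared)                  # swap in int8 kernels
out = quantized(torch.randn(1, 3, 32, 32))
print(out.shape, out.dtype)
```

In my real runs I use torchvision's quantizable ResNet-18 and calibrate on the ImageNet validation loader; the sketch just shows the prepare/calibrate/convert steps.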
This leads to the following questions about ILSVRC-2012 training (for R18 & 50):
- What are their expected QAT FP32 accuracies after training with `train_quantization.py`?
- What are their expected QAT INT8 accuracies after training with `train_quantization.py`?
- What are their expected INT8 accuracies after training with the standard `train.py`? Do they correspond to the accuracies listed here: https://pytorch.org/vision/main/models.html#table-of-all-available-quantized-classification-weights?
- Does anyone have references for the above Q1-3 for PreActResNet-18 & CIFAR-10 training?
It seems there are no QAT + ResNet references for ILSVRC-2012; the only documented from-scratch QAT recipe is for MobileNetV2 & V3 (https://github.com/pytorch/vision/tree/main/references/classification#qat-mobilenetv2). Do ResNets simply not train well with QAT? Thanks a lot!
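For completeness, this is the kind of QAT workflow I mean when asking the above, again as a toy sketch (not the torchvision recipe): fake-quant modules are inserted with `prepare_qat`, training proceeds as usual, then `convert` produces the INT8 model.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert,
)

class TinyNet(nn.Module):
    """Toy stand-in for ResNet; the same QAT workflow applies."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinyNet().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
qat_model = prepare_qat(model)          # insert fake-quant modules

# Short "training" loop on random data (ImageNet + real loss in practice).
opt = torch.optim.SGD(qat_model.parameters(), lr=1e-2)
for _ in range(3):
    loss = qat_model(torch.randn(4, 3, 32, 32)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

int8_model = convert(qat_model.eval())  # QAT FP32 -> INT8
y = int8_model(torch.randn(1, 3, 32, 32))
print(y.shape, y.dtype)
```

The "QAT FP32" accuracy in my questions is the accuracy of `qat_model` (fake-quant, still FP32 arithmetic) and "QAT INT8" is the accuracy of `int8_model` after `convert`.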