Quantized BatchNorm parameters not saved in state_dict

I found out that the scale and zero point parameters of the BatchNorm2d module after conversion with post-training quantization are not included in model.state_dict(). When the state dict of the converted model is saved to a file, these parameters are therefore lost. As a result, inference gives different results depending on whether it is run directly after post-training quantization, or after first saving the quantized model's state dict to a file and loading it again later.
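
A minimal sketch of what I'm seeing (module, variable, and file names are my own; this assumes the standard eager-mode PTQ flow on PyTorch 1.10 with the fbgemm qconfig):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.bn = nn.BatchNorm2d(4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.bn(self.quant(x)))

torch.manual_seed(0)

# Post-training static quantization, eager mode
model = M().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(8, 4, 8, 8))                     # calibration pass
torch.quantization.convert(model, inplace=True)

x = torch.randn(1, 4, 8, 8)
out_direct = model(x)                              # inference right after conversion

# Round-trip through a state dict
torch.save(model.state_dict(), "qmodel.pt")
model2 = M().eval()
model2.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model2, inplace=True)
torch.quantization.convert(model2, inplace=True)   # no calibration: bn qparams stay at defaults
model2.load_state_dict(torch.load("qmodel.pt"))    # bn scale/zero_point are missing from the file

out_reloaded = model2(x)
print(torch.allclose(out_direct, out_reloaded))    # False on the affected version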

This does not happen when the batch norm module is fused with another module (e.g., a convolution), because in that case the scale and zero point are saved as part of the fused module's state. A sketch of the fused case is below.
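
For comparison, a sketch of the fused path (again with my own module names and shapes; fuse_modules folds the BN into the conv in eval mode, and the quantized conv does serialize its output qparams):

```python
import torch
import torch.nn as nn

class ConvBN(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(4, 4, 3)
        self.bn = nn.BatchNorm2d(4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.bn(self.conv(self.quant(x))))

m = ConvBN().eval()
torch.quantization.fuse_modules(m, [["conv", "bn"]], inplace=True)  # bn folds into conv
m.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(m, inplace=True)
m(torch.randn(8, 4, 8, 8))                         # calibration pass
torch.quantization.convert(m, inplace=True)

# The quantized conv saves its scale/zero_point, so nothing is lost here:
print([k for k in m.state_dict() if "scale" in k or "zero_point" in k])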

An alternative is to save the state dict of the model before running torch.quantization.convert(model) (sketched below). However, is there a specific reason why the scale and zero point of the quantized batch norm module are not included in the state dict of the model after conversion?
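
A sketch of that workaround, reusing the M class from above. The idea is that the observers' min/max statistics are ordinary buffers and so survive the round trip, letting convert() recompute correct qparams after loading:

```python
# Serialize the calibrated-but-not-yet-converted model
prepared = M().eval()
prepared.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(prepared, inplace=True)
prepared(torch.randn(8, 4, 8, 8))                  # calibration pass
torch.save(prepared.state_dict(), "prepared.pt")

# Later: rebuild, restore the observer statistics, then convert
restored = M().eval()
restored.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(restored, inplace=True)
restored.load_state_dict(torch.load("prepared.pt"))
quantized = torch.quantization.convert(restored)   # qparams rebuilt from restored stats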

Reference: torch.nn.quantized.modules.batchnorm — PyTorch 1.10.0 documentation (the source shows that scale and zero point are taken from activation_post_process.calculate_qparams(); when loading into a fresh model without proper calibration, these values are therefore not set correctly)
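You can see the mismatch by inspecting the converted module from the repro above (attribute names as in the linked source): the values exist on the module instance but, on the affected version, do not appear among its serialized entries.

```python
qbn = model.bn                        # torch.nn.quantized.BatchNorm2d after convert
print(qbn.scale, qbn.zero_point)      # set from activation_post_process.calculate_qparams() at convert time
print(list(qbn.state_dict().keys()))  # no 'scale' / 'zero_point' entries here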

I think this might be a bug, thanks for reporting. Opened an issue here: quantized batchnorm parameters/buffers not saved in state_dict · Issue #69808 · pytorch/pytorch · GitHub