How to quantize the bias of a convolution in QAT (Quantization Aware Training) mode?

My question is similar to these discussions:

From these two discussions, I roughly understand why quantizing the bias is not possible in eager mode, and I gained some insight into quantizing the bias in FX. However, I would like to confirm: in the latest version of eager-mode QAT quantization, is it still not possible to quantize the bias? :blush:

Hi, we haven’t made changes to how we quantize bias in eager mode. In the graph-mode flows (FX and PT2 export based), you can configure this yourself by annotating the graph correctly, provided your backend supports quantized bias operators.
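For reference, the convention most integer conv backends expect (and what a graph annotation for bias typically derives) is an int32 bias whose scale is the product of the input and weight scales, with zero point 0. A minimal sketch of that arithmetic, assuming per-tensor scales and the hypothetical helper name `quantize_bias`:

```python
import torch

def quantize_bias(bias: torch.Tensor, input_scale: float, weight_scale: float):
    """Quantize a conv bias to int32 using the common backend convention:
    bias_scale = input_scale * weight_scale, zero_point = 0."""
    bias_scale = input_scale * weight_scale
    q_bias = torch.clamp(
        torch.round(bias / bias_scale),
        torch.iinfo(torch.int32).min,
        torch.iinfo(torch.int32).max,
    ).to(torch.int32)
    return q_bias, bias_scale

# Example: with input_scale=0.1 and weight_scale=0.05,
# a bias of 0.5 maps to the integer 100 at scale 0.005.
q, s = quantize_bias(torch.tensor([0.5, -0.25]), 0.1, 0.05)
```

Because the bias scale is fully determined by the input and weight scales, there is no independent observer for bias; in the FX/PT2 export flows this is expressed by deriving the bias quantization parameters from the conv's input and weight nodes.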