Conv biases are all zeros after QAT export to ONNX

Hi, I have trained a YOLOv5 QAT model and exported it to ONNX. I noticed that the conv biases in the ONNX graph are all set to 0. Is this expected for QAT modules? Is it fuse_modules that causes the biases to be 0 in the export?
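For reference, here is a minimal sketch that reproduces the same observation on a single Conv+BN block. This is not the actual YOLOv5 pipeline; it assumes a recent PyTorch where the eager-mode QAT utilities live under torch.ao.quantization, and it relies on the exporter's default constant folding so the zero bias lands in the initializer list. Exact export support for fake-quant ops depends on the PyTorch/opset version.

```python
import torch
import torch.ao.quantization as tq
import onnx
from onnx import numpy_helper

class ConvBN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3, bias=True)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return self.bn(self.conv(x))

model = ConvBN().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.fuse_modules_qat(model, [["conv", "bn"]], inplace=True)
tq.prepare_qat(model, inplace=True)

model(torch.randn(4, 3, 32, 32))  # one forward pass so the observers record scales

model.eval()
torch.onnx.export(model, torch.randn(1, 3, 32, 32), "qat.onnx", opset_version=13)

# Look up the bias tensor that feeds each Conv node in the exported graph.
m = onnx.load("qat.onnx")
tensors = {t.name: numpy_helper.to_array(t) for t in m.graph.initializer}
for node in m.graph.node:
    if node.op_type == "Conv" and len(node.input) > 2 and node.input[2] in tensors:
        # The bias fed to the Conv node is all zeros; the original conv
        # bias is applied further down the graph, before the batch norm.
        print(node.input[2], tensors[node.input[2]])
```

Inspecting the exported graph (e.g. in Netron) shows the same thing: the Conv node's bias input is all zeros, which matches what you are seeing.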

Hi @MrOCW, as far as I know there is no supported way to train models with PyTorch quantization and then export the quantized models to ONNX. We currently do not plan to add support for ONNX export of PyTorch quantized models. If this is important for your use case, feel free to submit a feature request or a PR and our team can take a look.

Hi @supriyar, I am not exporting the converted quantized model. I am exporting the QAT-prepared model with the fake quants and their quantization params still in place.

I see. If this happens for conv layers that are fused with batch_norm, the likely cause is this code: pytorch/conv_fused.py at master · pytorch/pytorch · GitHub. There we run the conv with a zero bias and add the original bias back as part of the batch_norm computation.
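To make the folding concrete, here is a hedged, standalone sketch of that trick. It is simplified from the logic in conv_fused.py: the weight fake-quant step is omitted and the variable names are only loosely based on that file. The conv runs with an explicit zero bias (this is the zero bias that shows up in the ONNX graph), and the original bias re-enters just before the batch norm, so the result matches a plain conv followed by bn.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
conv = torch.nn.Conv2d(3, 8, 3, bias=True)
bn = torch.nn.BatchNorm2d(8).eval()
bn.running_mean.uniform_(-1, 1)   # pretend stats from training
bn.running_var.uniform_(0.5, 1.5)
x = torch.randn(1, 3, 16, 16)

with torch.no_grad():
    # Reference: plain conv followed by batch norm.
    ref = bn(conv(x))

    # Fused-QAT style: fold the BN scale into the weight, run the conv with
    # an all-zero bias, then undo the scaling and add the original bias
    # back before the batch norm is applied.
    running_std = torch.sqrt(bn.running_var + bn.eps)
    scale_factor = bn.weight / running_std
    scaled_weight = conv.weight * scale_factor.reshape(-1, 1, 1, 1)
    zero_bias = torch.zeros_like(conv.bias)
    out = F.conv2d(x, scaled_weight, zero_bias, conv.stride, conv.padding)
    out = out / scale_factor.reshape(1, -1, 1, 1)
    out = out + conv.bias.reshape(1, -1, 1, 1)
    out = bn(out)

print(torch.allclose(ref, out, atol=1e-5))  # True: same output, zero conv bias
```

So the zeros you see are only in the Conv node itself; the bias information is not lost, it has just moved into the batch_norm path of the graph.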