Quantize conv layer with bias

I have a ResNet-based network. I tried to convert it to INT8, but static quantization caused an accuracy loss. I then tried QAT, but hit the assertion error "Only support fusing Conv2d that does not have bias".

Looking into the model structure, I found that all the conv layers have a bias:
nn.Conv2d(in_planes, out_planes, kernel_size, stride=stride, bias=True)

Why should the conv have no bias? Is it possible to use fake quantization to train it anyway?

Or can I just modify the fusing code in qat.conv_fuse.ConvBn2d so that it also fuses the conv bias?

When you fuse a conv with a batch norm, the conv does not need a separate bias term. This is because batch norm already has a trainable bias (shift) parameter that serves the same purpose. The reference ResNet implementation at https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L24 accordingly does not use a bias in its convs.
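To see why the bias is redundant, here is a minimal sketch (names and shapes are illustrative, not from your model). In training mode, batch norm normalizes with the batch statistics, so a constant per-channel bias added by the conv is subtracted right back out; in eval mode, the same effect is obtained by folding the removed bias into the BN's `running_mean`:

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(4, 3, 8, 8)

conv = nn.Conv2d(3, 16, 3, bias=True)
bn = nn.BatchNorm2d(16)

# Train mode: BN uses batch statistics, so the conv bias cancels out.
bn.train()
y_ref = bn(conv(x))

conv_nobias = copy.deepcopy(conv)
bias = conv_nobias.bias.detach().clone()
conv_nobias.bias = None          # same conv weights, bias removed
y_nobias = bn(conv_nobias(x))

print(torch.allclose(y_ref, y_nobias, atol=1e-5))  # bias had no effect

# Eval mode: BN uses running statistics, so fold the removed bias into
# running_mean to keep the output numerically identical.
bn_eval = copy.deepcopy(bn)
bn_eval.eval()
y_ref_eval = bn_eval(conv(x))

bn_fold = copy.deepcopy(bn)
with torch.no_grad():
    bn_fold.running_mean -= bias
bn_fold.eval()
y_fold_eval = bn_fold(conv_nobias(x))

print(torch.allclose(y_ref_eval, y_fold_eval, atol=1e-5))
```

So rather than changing the fusing code, you can drop the conv biases (folding them into the following BN as above) and then run the standard QAT flow on the bias-free model.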