How PyTorch simulates bias during quantization-aware training

We find that modeling bias in QAT is not very important, since it does not affect accuracy much. One workaround is to remove the bias from the Conv and add it explicitly outside the conv, so that the bias addition can be modeled as an add op (see the sketch below).
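A minimal sketch of that workaround, assuming eager-mode QAT: `ConvWithExternalBias` is a hypothetical module name, the conv is created with `bias=False`, the bias lives as a separate parameter, and the add goes through `FloatFunctional` so it can be observed/fake-quantized like any other add.

```python
import torch
import torch.nn as nn

class ConvWithExternalBias(nn.Module):
    """Hypothetical sketch: bias-free Conv2d followed by an explicit add,
    so the bias addition shows up as its own op during QAT."""

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # The conv itself carries no bias.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              bias=False, **kwargs)
        # Bias as a standalone parameter, shaped to broadcast over (N, C, H, W).
        self.bias = nn.Parameter(torch.zeros(out_channels, 1, 1))
        # FloatFunctional lets eager-mode quantization observe the add.
        self.add = nn.quantized.FloatFunctional()

    def forward(self, x):
        out = self.conv(x)
        # The bias addition is now an explicit add that fake-quant can wrap.
        return self.add.add(out, self.bias)
```

This is a sketch under those assumptions, not an official API; with FX graph mode quantization a plain `+` would typically be traced and handled without the `FloatFunctional` wrapper.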
