What is the correct way to QAT a conv layer with weight norm?

Hi all,

It seems that if I call prepare_qat on a weight_norm(conv) layer, the forward pre-hook for weight norm is removed and only a plain quantized weight remains (due to module swapping?). What is the right way to carry out quantization in this case? Do I need to create a custom quantization module for this?
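For reference, here is a minimal sketch of what weight_norm attaches to the conv (layer sizes are just illustrative) and what I believe gets lost in the swap:

```python
import torch.nn as nn

conv = nn.utils.weight_norm(nn.Conv2d(3, 8, 3))

# weight_norm splits `weight` into two parameters and registers a forward
# pre-hook that recomputes weight = g * v / ||v|| before every call.
print(sorted(n for n, _ in conv.named_parameters()))  # ['bias', 'weight_g', 'weight_v']
print(len(conv._forward_pre_hooks))                   # 1

# prepare_qat swaps nn.Conv2d for a fresh QAT conv via `from_float`, which
# (as far as I can tell) only carries over weight/bias/qconfig, so the
# weight_g / weight_v parametrization and the pre-hook do not survive.
```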

Thanks,

Xun.

Yeah, I think this is due to module swapping. You'll need to create a custom QAT module and a custom quantized module for this layer, and connect them with the quantization flow.
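To make that concrete, here is a rough sketch of what the eager-mode route could look like. The class names WNConv2d / QATWNConv2d are made up for illustration, the weight-norm math is reimplemented by hand, and you would still need a matching quantized module for convert():

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.ao.quantization import (
    get_default_qat_module_mappings, get_default_qat_qconfig, prepare_qat)

class WNConv2d(nn.Conv2d):
    """Float module: a Conv2d with weight norm applied to itself."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        nn.utils.weight_norm(self)  # adds weight_g / weight_v + a pre-hook

class QATWNConv2d(nn.Module):
    """QAT module: recompute the normalized weight, then fake-quantize it."""
    def __init__(self, float_mod):
        super().__init__()
        self.stride, self.padding = float_mod.stride, float_mod.padding
        self.dilation, self.groups = float_mod.dilation, float_mod.groups
        self.weight_g = nn.Parameter(float_mod.weight_g.detach().clone())
        self.weight_v = nn.Parameter(float_mod.weight_v.detach().clone())
        self.bias = (nn.Parameter(float_mod.bias.detach().clone())
                     if float_mod.bias is not None else None)
        self.qconfig = float_mod.qconfig
        self.weight_fake_quant = self.qconfig.weight()

    def forward(self, x):
        # same reparametrization weight_norm uses (dim=0), then fake quant
        norm = torch.linalg.vector_norm(self.weight_v, dim=(1, 2, 3), keepdim=True)
        weight = self.weight_g * self.weight_v / norm
        return F.conv2d(x, self.weight_fake_quant(weight), self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

    @classmethod
    def from_float(cls, mod, use_precomputed_fake_quant=False):
        return cls(mod)

# Extend the default float -> QAT mapping so prepare_qat swaps WNConv2d into
# the custom QAT module instead of dropping the weight norm.
mapping = dict(get_default_qat_module_mappings())
mapping[WNConv2d] = QATWNConv2d

model = nn.Sequential(WNConv2d(3, 8, 3), nn.ReLU()).train()
model.qconfig = get_default_qat_qconfig("fbgemm")
prepare_qat(model, mapping=mapping, inplace=True)
print(type(model[0]))  # QATWNConv2d, with weight norm + fake quant intact
```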

If you are using FX graph mode quantization, you can use the BackendConfig API (BackendConfig — PyTorch master documentation). We will publish a tutorial for this later. cc @andrewor
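Roughly, the shape would be something like this (a schematic sketch only, reusing the made-up WNConv2d / QATWNConv2d names from the sketch above; the exact dtype/observation settings, the reference quantized module for convert, and the tracing details depend on your backend and will be covered in the tutorial):

```python
import torch
from torch.ao.quantization.backend_config import (
    BackendPatternConfig, DTypeConfig, ObservationType, get_native_backend_config)

# int8 activations / int8 weights / float bias, as in the native configs
weighted_int8 = DTypeConfig(
    input_dtype=torch.quint8, output_dtype=torch.quint8,
    weight_dtype=torch.qint8, bias_dtype=torch.float)

wn_conv_config = (
    BackendPatternConfig(WNConv2d)   # pattern: the float weight-norm conv
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
    .add_dtype_config(weighted_int8)
    .set_root_module(WNConv2d)
    .set_qat_module(QATWNConv2d))    # swapped in by prepare_qat_fx

backend_config = get_native_backend_config().set_backend_pattern_config(wn_conv_config)
# then: prepare_qat_fx(model, qconfig_mapping, example_inputs,
#                      backend_config=backend_config)
```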

Hi!
Have you solved this? I don't know how to solve it yet.