It seems that if I call `prepare_qat` on a `weight_norm(conv)` layer, the forward pre-hook for weight norm is disabled/removed and only a plain fake-quantized weight remains (due to module swapping?). What is the right way to carry out quantization in this case? Do I need to create a custom quantization module for this?
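For reference, roughly what I'm doing (eager-mode quantization flow):

```python
import torch
import torch.nn as nn
from torch.quantization import get_default_qat_qconfig, prepare_qat

# weight_norm registers weight_g / weight_v and a forward pre-hook
conv = nn.utils.weight_norm(nn.Conv2d(3, 16, 3))
print(conv._forward_pre_hooks)  # shows the WeightNorm hook

model = nn.Sequential(conv)
model.qconfig = get_default_qat_qconfig('fbgemm')
model.train()
prepare_qat(model, inplace=True)

# the conv gets swapped for torch.nn.qat.Conv2d; the WeightNorm pre-hook
# (and weight_g / weight_v) no longer exist on the new module
print(type(model[0]), model[0]._forward_pre_hooks)
```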
Yeah, I think this is due to module swapping: `prepare_qat` replaces the float module with a freshly constructed QAT module, and forward pre-hooks are not carried over. You'll need to create a custom QAT module (and a matching quantized module for the `convert` step) for this layer, and connect it to the quantization flow through a custom module mapping.
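Something along these lines should work as a starting point. This is an untested sketch, not a definitive recipe: `WeightNormConv2d` and `WeightNormQATConv2d` are hypothetical names, and it assumes the eager-mode flow on a reasonably recent torch where `torch.quantization.get_default_qat_module_mappings` and `prepare_qat(..., mapping=...)` are available.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.qat as nnqat
from torch.quantization import get_default_qat_module_mappings, prepare_qat


class WeightNormConv2d(nn.Conv2d):
    """Float conv with weight norm; a distinct class so the QAT mapping can
    target it (plain weight_norm leaves the class as nn.Conv2d)."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        nn.utils.weight_norm(self)  # registers weight_g / weight_v + pre-hook


class WeightNormQATConv2d(nnqat.Conv2d):
    """QAT conv that recomputes weight = g * v / ||v|| in forward and
    fake-quantizes the result, so the reparameterization survives the swap."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # recreate the weight-norm parameters; norm is taken over all dims
        # except dim 0, matching weight_norm(conv, dim=0)
        self.weight_g = nn.Parameter(
            self.weight.detach().norm(2, dim=[1, 2, 3], keepdim=True))
        self.weight_v = nn.Parameter(self.weight.detach().clone())

    def effective_weight(self):
        v = self.weight_v
        return self.weight_g * v / v.norm(2, dim=[1, 2, 3], keepdim=True)

    def forward(self, input):
        # same as nnqat.Conv2d.forward, but on the reparameterized weight
        # (assumes the default 'zeros' padding_mode)
        return F.conv2d(input, self.weight_fake_quant(self.effective_weight()),
                        self.bias, self.stride, self.padding,
                        self.dilation, self.groups)

    @classmethod
    def from_float(cls, mod):
        # called by prepare_qat during the swap
        assert hasattr(mod, 'qconfig') and mod.qconfig is not None
        qat = cls(mod.in_channels, mod.out_channels, mod.kernel_size,
                  stride=mod.stride, padding=mod.padding,
                  dilation=mod.dilation, groups=mod.groups,
                  bias=mod.bias is not None, qconfig=mod.qconfig)
        # copy the reparameterized weights instead of the flat .weight
        qat.weight_g.data.copy_(mod.weight_g.detach())
        qat.weight_v.data.copy_(mod.weight_v.detach())
        if mod.bias is not None:
            qat.bias.data.copy_(mod.bias.detach())
        return qat


# hook the custom pair into the flow via a custom mapping
mapping = dict(get_default_qat_module_mappings())
mapping[WeightNormConv2d] = WeightNormQATConv2d

model = nn.Sequential(WeightNormConv2d(3, 16, 3))
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model.train()
prepare_qat(model, mapping=mapping, inplace=True)
```

For `convert` you'd do the analogous thing with a custom entry in the convert mapping. Since the effective weight is static at inference time, one simple option is to fold `g * v / ||v||` back into a plain `.weight` on the QAT module right before calling `convert` and point the mapping at `torch.nn.quantized.Conv2d`, but I haven't tested that end to end.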