torch.quantization.convert does not work if my model's QConfig activation is set to torch.nn.Identity

My goal is to quantize the model's weights but not the activations during QAT. I went through the PyTorch GitHub, but my attempt gives me an error:
AttributeError: 'Identity' object has no attribute 'calculate_qparams'

You can't just use Identity: the quantization flow uses information from the qconfig to determine a number of things and makes assumptions about the types of objects it contains. You should use the actual weight-only qconfig instead:

float_qparams_weight_only_qconfig
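A minimal sketch of how this qconfig is used in the eager-mode flow, assuming an `nn.Embedding` layer (`float_qparams_weight_only_qconfig` is the qconfig PyTorch ships for weight-only quantization of embedding layers); whether it applies to your model depends on which layer types you are quantizing:

```python
import torch
import torch.nn as nn
from torch.quantization import (
    float_qparams_weight_only_qconfig,
    prepare,
    convert,
)

# Model with an embedding layer whose weights we want quantized.
model = nn.Sequential(nn.Embedding(10, 4))

# Attach the weight-only qconfig instead of building a QConfig
# with activation=torch.nn.Identity.
model[0].qconfig = float_qparams_weight_only_qconfig

# Standard eager-mode flow: prepare inserts observers, convert
# swaps the float module for its quantized counterpart.
prepare(model, inplace=True)
convert(model, inplace=True)

# The embedding now stores quantized weights but still takes and
# returns float tensors, so activations are left unquantized.
out = model(torch.tensor([[1, 2, 3]]))
```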