I quantized my model with post-training quantization in PyTorch, and now I want to compare the gradient of the input ‘x’ before and after quantization. I first set requires_grad=True, but after self.quant(x), x.requires_grad became False. When I tried to set it back to True, PyTorch raised RuntimeError: only Tensors of floating point dtype can require gradients. How can I get the gradient of a quantized tensor?
x = self.preprocess(x)
x = self.quant(x)           # x is now a quantized (integer-backed) tensor
# x.requires_grad_(True)    # raises the RuntimeError above
x = self.features(x)
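For context, the dtype restriction can be reproduced outside my model with a plain quantized tensor. This is a minimal sketch (the scale and zero_point values are arbitrary, chosen only for illustration), showing that the quantized tensor refuses requires_grad, while its dequantized float copy accepts it:

```python
import torch

x = torch.randn(4)
# Quantize to int8 with an arbitrary scale/zero_point
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
print(xq.requires_grad)  # False: quantized tensors do not track gradients

try:
    xq.requires_grad_(True)
except RuntimeError as e:
    print(e)  # "only Tensors of floating point dtype can require gradients"

# One possible workaround: dequantize back to float and track gradients there,
# accepting that the gradient is then taken w.r.t. the dequantized values
xd = xq.dequantize()
xd.requires_grad_(True)
print(xd.requires_grad)  # True
```

This does not give a gradient through the integer values themselves (the rounding in quantization is non-differentiable anyway); it only lets autograd operate on the float representation downstream of the quant step.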