I tried to use `torch.autograd.grad()` to calculate input gradients for a quantized model, just as we usually do with full-precision models:
```python
for idx, (inputs, targets) in enumerate(data_loader):
    with torch.enable_grad():
        inputs.requires_grad = True
        outputs = quantized_model(inputs)
        loss = criterion(outputs, targets)
        grads = torch.autograd.grad(loss, inputs)
```
But I got a `RuntimeError`:

```
element 0 of tensors does not require grad and does not have a grad_fn
```
Do models quantized with PyTorch quantization currently not support backpropagation? Is there some method I can use to calculate gradients for PyTorch quantized models?
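For comparison, if I understand correctly, gradients do flow through PyTorch's *fake*-quantization ops (the float simulation used for QAT), because they implement a straight-through estimator. A minimal sketch of what I mean (the scale/zero-point values here are arbitrary, just for illustration):

```python
import torch

# Input tensor we want gradients with respect to.
x = torch.randn(4, 3, requires_grad=True)

# Simulated 8-bit quantization: the output is still a float tensor with a
# grad_fn, and the backward pass uses the straight-through estimator
# (gradient 1 inside the quantization range, 0 outside).
y = torch.fake_quantize_per_tensor_affine(
    x, scale=0.1, zero_point=0, quant_min=-128, quant_max=127
)

loss = y.sum()
(grad,) = torch.autograd.grad(loss, x)  # works, unlike the true-INT8 model
```

So the failure seems specific to models *converted* to true integer kernels, where the outputs no longer carry a `grad_fn`.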