I am interested in calculating the gradient of an arbitrary embedding output of shape (B, C', H', W') with respect to the input of shape (B, C, H, W). However, I only want the gradients for a specific subset of the input (e.g. input[:, :, y1:y2, x1:x2]).
Consider the following example code:
output = model(images)
grads = torch.autograd.grad(
    outputs=output,
    inputs=images[:, :, y1:y2, x1:x2],
    grad_outputs=torch.ones_like(output),
)
Computing grads fails with the following:
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
My understanding (from this) is that the error occurs because images[:, :, y1:y2, x1:x2] creates a new tensor (a view of images) that was never used in the forward pass, so it is not part of the graph that produced output. How can I get around this?
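A minimal repro of the error, with hypothetical tensor sizes and a trivial forward computation standing in for the real model:

```python
import torch

# Hypothetical stand-ins for the real images and model in the question.
images = torch.randn(1, 3, 8, 8, requires_grad=True)
output = (images * 2.0).sum()  # trivial "forward pass"

# The slice is a NEW tensor, created after the forward pass, so it never
# appears in the graph that produced `output`.
patch = images[:, :, 0:4, 0:4]

got_error = False
try:
    torch.autograd.grad(outputs=output, inputs=patch)
except RuntimeError as e:
    got_error = True
    print(e)  # "One of the differentiated Tensors appears to not have been used in the graph..."
```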
NOTE:
I am not interested in cropping the full gradient. My image has a large spatial resolution, and I only need the gradients associated with a certain patch of it; calculating the gradient with respect to the entire image is expensive.
So this solution will not work:
output = model(images)
# torch.autograd.grad returns a tuple, hence the unpacking
grads, = torch.autograd.grad(outputs=output, inputs=images, grad_outputs=torch.ones_like(output))
grads = grads[:, :, y1:y2, x1:x2]
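For reference, the closest workaround I have come up with is to make the patch a leaf tensor and splice it back into the image before the forward pass, so the graph actually uses it. A sketch, with a hypothetical conv layer standing in for the model and the names patch / stitched being my own:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real model and data.
model = nn.Conv2d(3, 8, kernel_size=3, padding=1)
images = torch.randn(2, 3, 64, 64)
y1, y2, x1, x2 = 10, 20, 15, 25

# Make the patch a leaf tensor that requires grad, then splice it back
# into a copy of the image so the forward pass actually uses it.
patch = images[:, :, y1:y2, x1:x2].clone().requires_grad_(True)
stitched = images.clone()
stitched[:, :, y1:y2, x1:x2] = patch

output = model(stitched)
grads, = torch.autograd.grad(
    outputs=output,
    inputs=patch,  # a leaf tensor that IS used in the graph
    grad_outputs=torch.ones_like(output),
)
print(grads.shape)  # torch.Size([2, 3, 10, 10]) -- gradient only for the patch
```

The returned gradient has the patch's shape rather than the full image's. Whether this actually saves much compute is unclear to me, since the backward pass still traverses the whole model; it mainly limits the stored input gradient to the patch.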