I have a neural model f that produces a score per class. I'd like to compute something like Class Activation Maps by computing the gradient of each output with respect to the input (I don't need any gradients with respect to the weights of the model).
Is there a batched way to compute gradients with respect to every output value? I found torch.autograd.functional.jacobian — PyTorch 1.10.0 documentation, but it seems that it will run the forward pass many times as well. Is that true? Does calling it like below make sense?
```python
import torch
import torch.nn as nn

model = nn.Linear(10, 20).requires_grad_(False)
x = torch.zeros(5, 10, requires_grad=True)
y = model(x)
print(x.shape)  # torch.Size([5, 10])
print(y.shape)  # torch.Size([5, 20])

# lambda re-runs the forward so the Jacobian actually depends on the input
grads = torch.autograd.functional.jacobian(lambda inp: model(inp), x, vectorize=True)
print(grads.shape)  # torch.Size([5, 20, 5, 10])
```
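For context, since each sample's output only depends on that sample's input, I believe the cross-sample blocks of that [5, 20, 5, 10] Jacobian are zero, so the per-sample Jacobians could be extracted like this (a sketch, assuming a batch-independent model):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 20).requires_grad_(False)
x = torch.zeros(5, 10)

# Full Jacobian of the batched forward: shape [5, 20, 5, 10]
jac = torch.autograd.functional.jacobian(model, x, vectorize=True)

# Samples are independent, so keep only the diagonal blocks
# over the two batch dims -> per-sample Jacobians of shape [5, 20, 10]
per_sample = jac.diagonal(dim1=0, dim2=2).permute(2, 0, 1)
print(per_sample.shape)  # torch.Size([5, 20, 10])
```

But this still materializes the full [5, 20, 5, 10] tensor first, which is mostly zeros, so it doesn't feel like the intended way to do this.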