I’m using the L2 norm of a PDE residual as my loss function. The inputs are the variables (x, y) and the outputs are the quantities (r, u, v), and I want to compute the Jacobian entries dr/dx, dr/dy, du/dx, … for every sample, ideally in batch (I believe the per-sample approach is very slow, though I can’t be sure since I haven’t been able to try a batched version yet!). Currently I use torch.autograd.functional.jacobian and loop over each sample in the batch, as shown below. Can this be done without the for loop (or at least much more efficiently), ideally as a simple batch operation? If so, how?
x, y = batch
x.requires_grad_(True)
y_hat = self(x)  # forward pass (not used in the Jacobian loop below)

# Per-sample Jacobians: loop over the batch, one jacobian() call per sample
jacobian_list = []
for inp in x:
    inp = inp.unsqueeze(dim=0)  # add a batch dim of 1 so the model accepts it
    jacobian = torch.autograd.functional.jacobian(
        self, inp, create_graph=True, strict=False, vectorize=False
    )
    jacobian_list.append(jacobian)
# each Jacobian has shape (1, 3, 1, 2); stack and collapse to (batch, 3, 2)
gradients = torch.reshape(torch.stack(jacobian_list), (-1, 3, 2))
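One idea I had for a batched alternative, from reading the torch.func docs: composing vmap with jacrev might compute all the per-sample Jacobians in a single vectorized call. This is only a sketch under my own assumptions, not something I’ve verified: it assumes PyTorch 2.x with torch.func available (earlier versions had these in functorch), and that the model accepts a single unbatched sample and has no layers that mix samples across the batch (e.g. BatchNorm).

from torch.func import jacrev, vmap  # PyTorch 2.x; previously functorch.jacrev / functorch.vmap

# jacrev(self) gives the Jacobian of the model for ONE unbatched sample,
# i.e. input of shape (2,) -> output of shape (3,), so a (3, 2) matrix;
# vmap then maps that over the batch dimension of x (shape (N, 2)).
# Expected result shape: (N, 3, 2), the same layout as `gradients` above.
gradients = vmap(jacrev(self))(x)

As far as I can tell the result should still carry a graph back to the network parameters (so it could feed into the loss like the looped version with create_graph=True), but I’d appreciate confirmation that this is actually equivalent and faster.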
Thanks!