How to have vector of gradient for each parameter?

I know that we can’t call backward() on a vector Tensor without reducing it to a scalar. However, I also need the gradient to be a vector, with each entry being the gradient of the corresponding component of the loss with respect to the parameter, i.e.,

import torch
x = torch.tensor([4.0], requires_grad=True)
L1 = torch.sin(x) * torch.cos(x)
L2 = torch.sin(x**2) * torch.cos(x/2)
L = torch.stack([L1, L2])
L.backward()  # raises RuntimeError: grad can be implicitly created only for scalar outputs
print(x.grad)

I hope that x.grad equals [dL1/dx, dL2/dx].
Is there any way I can do this in a simple manner, without copying and pasting to a temporary variable multiple times (as many times as there are entries in the loss Tensor)?
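
For concreteness, the kind of per-component workaround I would like to avoid looks roughly like this (one separate grad call per loss entry):

import torch

x = torch.tensor([4.0], requires_grad=True)
L1 = torch.sin(x) * torch.cos(x)
L2 = torch.sin(x**2) * torch.cos(x / 2)

# one backward/grad pass per loss component
grads = [torch.autograd.grad(Li, x, retain_graph=True)[0] for Li in (L1, L2)]
print(torch.stack(grads).squeeze())  # [dL1/dx, dL2/dx]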

Hi Kadek!

Does autograd’s jacobian() do what you want?
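
For your toy example, a minimal sketch along these lines (the loss_vector wrapper is just an illustrative name) should give [dL1/dx, dL2/dx]:

import torch
from torch.autograd.functional import jacobian

x = torch.tensor([4.0])

def loss_vector(x):
    L1 = torch.sin(x) * torch.cos(x)
    L2 = torch.sin(x**2) * torch.cos(x / 2)
    return torch.stack([L1, L2])  # shape (2, 1)

# jac has shape (2, 1, 1): output shape followed by input shape
jac = jacobian(loss_vector, x)
print(jac.squeeze())  # [dL1/dx, dL2/dx]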

Best.

K. Frank

Yes, I think for the above toy example jacobian() turns out to be enough. However, for my real use case, which is an nn.Module, it is not enough on its own. As the thread Get gradient and Jacobian wrt the parameters suggests, we have to write a new function that takes the model parameters as its inputs, because jacobian() computes the gradient with respect to the function's inputs. I will try that first. Thank you
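
For future readers, a rough sketch of that idea using torch.func (available in recent PyTorch releases) might look like this; the tiny Linear model and input below are made up purely for illustration:

import torch
import torch.nn as nn
from torch.func import functional_call, jacrev

model = nn.Linear(3, 2)  # hypothetical tiny model
x = torch.randn(3)       # hypothetical input

params = dict(model.named_parameters())

def loss_vector(params):
    # run the module with the given parameter tensors instead of its own
    return functional_call(model, params, (x,))  # shape (2,), one "loss" per output

# jac maps each parameter name to a Jacobian of shape (num_outputs, *param.shape)
jac = jacrev(loss_vector)(params)
for name, j in jac.items():
    print(name, j.shape)  # weight: (2, 2, 3), bias: (2, 2)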