How to call .backward() on a non-scalar loss vector?

Hi! I’m new here. I found some similar questions, but I couldn’t find a good way to get the gradients when backpropagating from the per-example losses of a minibatch of size N.
Here’s the code:
Scores = model(X_var)

loss = -1 * torch.log(Scores)  # per-example loss, shape (N,)
for i in range(num_train):
    loss[i].backward(retain_graph=True)
grads = X_var.grad
X_var has shape (N, 3, H, W) and the loss has shape (N,). How can I get the gradients of X_var in a single call rather than a for loop?
Thanks!

You can pass a Tensor of ones (or other weights) as the gradient argument to .backward(). That is equivalent to using the (weighted) sum of the per-example losses as the total loss.
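
For example, a minimal sketch (the model and shapes below are hypothetical stand-ins for your own; the sigmoid is only there to keep the log well-defined):

import torch

N, C, H, W = 4, 3, 8, 8
X_var = torch.randn(N, C, H, W, requires_grad=True)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(C * H * W, 1))

Scores = model(X_var).squeeze(1)         # shape (N,)
loss = -1 * torch.log(Scores.sigmoid())  # per-example loss, shape (N,)

# Backpropagate all N losses in one call by passing a tensor of ones;
# this is equivalent to loss.sum().backward().
loss.backward(torch.ones_like(loss))

grads = X_var.grad                       # shape (N, C, H, W)

This gives the same X_var.grad as your loop, since in both cases the per-example gradients are accumulated into X_var.grad.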

Best regards

Thomas

Oh, thanks! It really works.