In the PyTorch official intro on autograd, Q is a vector produced by applying an element-wise function to two vectors a and b.
How do I calculate the gradients of a and b if the function is not element-wise, i.e. each element of Q comes from a different formula applied to different elements of a and b, e.g. Q1 = 3a**3 - b**2 and Q2 = a**2 - 3b**3?
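As an illustration of the kind of setup being asked about (this is my own sketch, not from the PyTorch docs): each output element can be built from a separate expression and then combined with torch.stack, which keeps the computation graph intact, so backward still reaches a and b. The values of a and b here are arbitrary.

```python
import torch

a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)

# A different formula for each output element,
# each using different elements of a and b
q1 = 3*a[0]**3 - b[0]**2
q2 = a[1]**2 - 3*b[1]**3

# stack (not torch.tensor) so the graph connecting Q to a and b survives
Q = torch.stack([q1, q2])

external_grad = torch.tensor([1., 1.])
Q.backward(gradient=external_grad)

# dq1/da0 = 9*a0**2 = 36, dq2/da1 = 2*a1 = 6
print(a.grad)   # tensor([ 36.,   6.])
# dq1/db0 = -2*b0 = -12, dq2/db1 = -9*b1**2 = -144
print(b.grad)   # tensor([ -12., -144.])
```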
I ran the following test with one vector input x and one vector output y. However, the gradient of x cannot be computed.

```python
import torch

x = torch.tensor([1., 2.], requires_grad=True)
y1 = 2*x**2 + x
y2 = 3*x + 4*x**3
y = torch.tensor([y1, y2])
external_grad = torch.tensor([1., 1.])
y.backward(gradient=external_grad)
x.grad
```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
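From what I can tell, the error comes from y = torch.tensor([y1, y2]): constructing a new tensor copies the values of y1 and y2 and detaches them from the autograd graph, so the result has no grad_fn. Replacing it with torch.stack (which is a differentiable op) appears to fix it; the sketch below also sizes external_grad with torch.ones_like so it matches the stacked output shape.

```python
import torch

x = torch.tensor([1., 2.], requires_grad=True)
y1 = 2*x**2 + x
y2 = 3*x + 4*x**3

# torch.stack keeps y connected to x in the graph;
# torch.tensor([y1, y2]) would copy the values and detach them
y = torch.stack([y1, y2])

external_grad = torch.ones_like(y)
y.backward(gradient=external_grad)

# x.grad accumulates dy1/dx + dy2/dx = (4x + 1) + (3 + 12x**2)
print(x.grad)   # tensor([20., 60.])
```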