In the official PyTorch intro to autograd, Q is a vector produced by applying a function element-wise to two vectors a and b.

How do I calculate the gradients of a and b if the function is not element-wise, i.e. each element of Q is obtained by a different calculation on the corresponding elements of a and b, e.g. Q[0] = 3*a[0]**3 - b[0]**2 and Q[1] = a[1]**2 - 3*b[1]**3?

I did the following test with one vector input x and one vector output y. However, the gradient of x cannot be calculated.

```
import torch
x = torch.tensor([1., 2.], requires_grad=True)
y1 = 2*x[0]**2 + x[1]
y2 = 3*x[0] + 4*x[1]**3
y = torch.tensor([y1, y2])
external_grad = torch.tensor([1., 1.])
y.backward(gradient=external_grad)  # raises the RuntimeError below
x.grad
```

Error:

`RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`
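For reference, a version of the test that runs, assuming the problem is that `torch.tensor([y1, y2])` copies the values into a new leaf tensor detached from the autograd graph, whereas `torch.stack` keeps the stacked elements connected to it:

```python
import torch

x = torch.tensor([1., 2.], requires_grad=True)
y1 = 2*x[0]**2 + x[1]
y2 = 3*x[0] + 4*x[1]**3

# torch.stack keeps y1 and y2 attached to the graph,
# unlike torch.tensor([y1, y2]), which copies the values
y = torch.stack([y1, y2])

external_grad = torch.tensor([1., 1.])
y.backward(gradient=external_grad)

# x.grad[i] = sum over j of dy_j/dx_i, weighted by external_grad:
# [4*x0 + 3, 1 + 12*x1**2] = [7., 49.]
print(x.grad)
```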
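And for the non-element-wise Q from the question, my understanding is that the same `torch.stack` construction would work even though each element of Q uses a different formula (the values of a and b here are hypothetical, chosen just to have something to differentiate):

```python
import torch

a = torch.tensor([1., 2.], requires_grad=True)
b = torch.tensor([3., 4.], requires_grad=True)

# each element of Q comes from a different formula over different elements
Q = torch.stack([3*a[0]**3 - b[0]**2,
                 a[1]**2 - 3*b[1]**3])

Q.backward(gradient=torch.ones_like(Q))

print(a.grad)  # [9*a0**2, 2*a1]    = [9., 4.]
print(b.grad)  # [-2*b0, -9*b1**2]  = [-6., -144.]
```

Since elements of a and b that do not appear in a given Q[k] simply contribute zero to that row of the Jacobian, backward handles the "different calculations per element" case with no extra work.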