The following code correctly calculates the derivative of `loss` with respect to each parameter in `x`:

```
import torch
def loss(x): return torch.sum(torch.pow(x, 3))
x = torch.tensor([1, 3, 5], dtype=torch.float64, requires_grad=True)
loss_out = loss(x)
first_derivative = torch.autograd.grad(loss_out, x, create_graph=True)[0]
```

This gives `first_derivative = [3, 27, 75]`, i.e. `3*x^2`, as expected.

However, if I want to calculate the derivative with respect to a single element only, replacing the final line as follows throws an error:

`first_derivative = torch.autograd.grad(loss_out, x[0], create_graph=True)[0]`

```
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```

I must be missing something quite basic about PyTorch.

Any indexing of `x` (such as `x[0:3]`) raises the same error, claiming the tensor was not used in the graph.
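For now I can work around it by differentiating with respect to the full tensor and then indexing the result, though I assume this computes more than I actually need:

```
import torch

def loss(x):
    return torch.sum(torch.pow(x, 3))

x = torch.tensor([1, 3, 5], dtype=torch.float64, requires_grad=True)
loss_out = loss(x)

# Workaround: take the gradient w.r.t. the whole tensor, then index it.
full_grad = torch.autograd.grad(loss_out, x, create_graph=True)[0]
single = full_grad[0]  # derivative w.r.t. x[0] only, i.e. 3 * 1**2
```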

Can anyone help me see what I’ve done wrong?

Thanks