Find gradient of output with respect to part of input

I am trying to find the gradient of an output with respect to part of the input using torch.autograd.grad, as follows:

import numpy as np
import torch

x = torch.tensor(np.array([1.0, 2.0]), requires_grad=True)  # must be float for requires_grad
y = torch.sum(x)
torch.autograd.grad([y], [x[0]])[0]

But I get this error:

One of the differentiated Tensors appears to not have been used in the graph.

Presumably this is because x[0] creates a new tensor via indexing, so it is not the leaf tensor that y was actually computed from.

If I change the last line to

torch.autograd.grad([y], [x])[0]

then everything works, but that computes the gradient with respect to all of the inputs, not just the part I care about.
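For completeness, here is the working variant in full. Indexing the returned gradient does give the value I want for x[0], but the whole gradient is still computed first (same x and y as above):

import numpy as np
import torch

x = torch.tensor(np.array([1.0, 2.0]), requires_grad=True)
y = torch.sum(x)

# Gradient with respect to the whole input...
grad = torch.autograd.grad([y], [x])[0]
# ...from which the component for x[0] can be read off.
print(grad[0])  # tensor(1., dtype=torch.float64)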

So my question is: is it possible to compute the gradient of an output with respect to only part of the input?
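One workaround I have considered (just a sketch, I am not sure it is the idiomatic way) is to keep the part I want the gradient for as its own leaf tensor and assemble the full input with torch.cat, so that torch.autograd.grad can target it directly:

import torch

# Keep the slice of interest as a separate leaf tensor.
x0 = torch.tensor([1.0], requires_grad=True)
x_rest = torch.tensor([2.0])  # no gradient needed for this part

x = torch.cat([x0, x_rest])  # full input, built from the pieces
y = torch.sum(x)

# Now differentiating with respect to x0 alone works.
print(torch.autograd.grad([y], [x0])[0])  # tensor([1.])

But this requires restructuring how the input is built, so I would prefer a solution that works on a slice of an existing tensor if one exists.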