Compute gradient with respect to the input for high-dimensional output

Hello,
I am training a model on a sequence labelling task and I want to compute the gradient of the output with respect to the input. I want to do that only for specific positions, not for the whole sequence. How can I do this in the most efficient way?

You can use the inputs= argument to backward(), or use torch.autograd.grad (torch.autograd.grad — PyTorch 2.3 documentation) to specify which inputs you would like to compute derivatives with respect to.
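
Below is a minimal sketch of both options. The model, tensor shapes, and position indices are placeholders, not taken from your setup: a small nn.Linear stands in for the real tagger, and the gradient is restricted to specific sequence positions by indexing them and reducing to a scalar before calling grad/backward.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

batch, seq_len, emb_dim, num_labels = 2, 10, 16, 5
model = nn.Linear(emb_dim, num_labels)  # stand-in for the actual sequence tagger

# Input embeddings we want gradients with respect to.
embeddings = torch.randn(batch, seq_len, emb_dim, requires_grad=True)
logits = model(embeddings)  # shape: (batch, seq_len, num_labels)

# Restrict to specific sequence positions (illustrative indices).
positions = torch.tensor([2, 7])
selected = logits[:, positions, :].sum()  # reduce to a scalar for autograd

# Option 1: torch.autograd.grad, listing the inputs explicitly.
(grad_wrt_input,) = torch.autograd.grad(selected, inputs=embeddings)
print(grad_wrt_input.shape)  # (batch, seq_len, emb_dim)

# Option 2: backward(inputs=...) populates .grad only for the listed tensors.
# selected.backward(inputs=[embeddings])
# print(embeddings.grad.shape)
```

Both approaches avoid accumulating gradients into the model parameters, and positions outside the selected indices simply receive zero gradient.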