Derivatives of output w.r.t. input

Hi, I want to compute the derivatives of the output of a network with respect to its inputs.
Currently, I loop over each output element and at each iteration I call y_pred[i,j].backward() and then grab the grad of the input X. This seems inefficient. Is there a better way?

Code snippet is below. y_pred is the output of the neural network, X is the input to the neural network.

for i in range(y_pred.shape[0]):
    for j in range(y_pred.shape[1]):

        # Reset the accumulated gradient before each backward pass.
        try:
            X.grad.zero_()
        except AttributeError:
            pass

        # Backward on a single output element; retain the graph so the
        # next iteration can backpropagate through it again.
        y_pred[i, j].backward(retain_graph=True)

        print(i, j)
        print(X.grad.data)

Hi,

If you want to get all of these values, I’m afraid there is no other way to do it :confused:
If you only need a linear combination of these values, then you can do better by using the linearity of the derivative: compute the linear combination on the outputs first, then do a single backward.
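
For example, if the sum of the gradients over all output elements is enough, one backward on the summed output gives it in a single pass (a minimal sketch; the model and tensor names are made up for illustration):

import torch

# Hypothetical setup: a small network and an input that requires gradients.
model = torch.nn.Linear(3, 2)
X = torch.randn(5, 3, requires_grad=True)
y_pred = model(X)

# Linear combination of the outputs (here a plain sum), then one backward.
# X.grad now holds d(sum of all outputs) / dX from a single pass instead of
# one backward call per output element.
y_pred.sum().backward()
print(X.grad)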

Note that you should never use .data in PyTorch. Replace it with .detach() if you want to break the graph, or with nothing if you just want to access the Tensor.
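
For instance, the last line of your snippet could drop .data (a short sketch of both options):

# Just reading the gradient: access the tensor directly.
print(X.grad)

# If you need a copy that is cut off from the autograd graph, use .detach().
grad_detached = X.grad.detach()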