Hi, I want to compute the derivatives of a network's outputs with respect to its inputs.
Currently, I loop over every output element, call y_pred[i, j].backward() at each iteration, and then read off the grad of the input X. This seems inefficient. Is there a better way?
Code snippet is below. y_pred is the output of the neural network; X is its input.
for i in range(y_pred.shape[0]):
    for j in range(y_pred.shape[1]):
        # Reset any gradient accumulated on X from the previous iteration.
        try:
            X.grad.zero_()
        except AttributeError:
            # X.grad is still None before the first backward call.
            pass
        # retain_graph=True keeps the graph alive for the next backward call.
        y_pred[i, j].backward(retain_graph=True)
        print(i, j)
        print(X.grad)
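For context, one alternative I have found is torch.autograd.functional.jacobian, which computes all the entries in one call. Here is a minimal self-contained sketch of what I mean (the model and X below are toy stand-ins for my real setup, not my actual code):

```python
import torch

# Toy stand-ins for my real network and input batch.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(3, 4),
    torch.nn.Tanh(),
    torch.nn.Linear(4, 2),
)
X = torch.randn(5, 3)

# Full Jacobian of the output w.r.t. the input:
# jac[i, j, k, l] = d y_pred[i, j] / d X[k, l]
jac = torch.autograd.functional.jacobian(model, X)
print(jac.shape)  # (5, 2, 5, 3): output shape followed by input shape
```

Is this the recommended approach, or is there a cheaper way when I only need the per-sample derivatives (i.e. the k == i block)?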