Computing gradients of outputs w.r.t. inputs

Hi,

I trained a neural network model and would like to compute the gradients of its outputs with respect to its inputs, using the following code:

input_var = Variable(torch.from_numpy(X), requires_grad=True).type(torch.FloatTensor)
predicted_Y = model_2.forward(input_var)
predicted_Y.backward(torch.ones_like(predicted_Y), retain_graph=True)

where X is the input data and model_2 is the neural network. But input_var.grad comes back as None. I googled it, but most reported issues are caused by requires_grad not being set to True, which is not the case here. Does anyone know what the problem might be?

Thank you!

Your input_var is an intermediate result, not a leaf: you created a variable that requires grad, and then .type() created a new tensor downstream of it. Gradients are only accumulated in .grad for leaf tensors, so your input_var.grad stays None.
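You can check this directly with .is_leaf. A minimal demonstration mirroring the question's .type() call (placeholder tensors, not your data):

import torch

x = torch.randn(3, dtype=torch.double, requires_grad=True)  # a leaf tensor
y = x.type(torch.FloatTensor)  # the cast creates a NEW tensor downstream of x

print(x.is_leaf)  # True  -> backward() will populate x.grad
print(y.is_leaf)  # False -> y.grad stays None; gradients only flow through y to x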

Also, your code has some other issues:

  1. You don’t need Variable wrappers anymore. Just write input_var = torch.as_tensor(X, dtype=torch.float).requires_grad_(). Note the trailing requires_grad_(): torch.as_tensor has no requires_grad argument, and without it the leaf won’t accumulate gradients.
  2. You should call model_2(input_var) rather than model_2.forward(input_var) directly. Calling forward() directly skips some hooks and can cause incorrect results in certain cases.
  3. retain_graph isn’t needed in your case, since you only call backward() once (see the combined sketch after this list).
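
Putting the three fixes together, here is a minimal runnable sketch. The X and model_2 below are stand-ins for the question's data and model, not the originals:

import numpy as np
import torch
import torch.nn as nn

# Stand-ins for the X and model_2 from the question (assumptions for illustration)
X = np.random.randn(5, 4)
model_2 = nn.Sequential(nn.Linear(4, 3), nn.Tanh(), nn.Linear(3, 1))

# A leaf tensor that requires grad: no Variable, no .type() afterwards
input_var = torch.as_tensor(X, dtype=torch.float).requires_grad_()

predicted_Y = model_2(input_var)                    # call the module, not .forward(), so hooks run
predicted_Y.backward(torch.ones_like(predicted_Y))  # single backward pass: no retain_graph needed

print(input_var.grad)  # gradient of the summed outputs w.r.t. the inputs, same shape as input_var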