The gradient of input variables is None

Hello, everyone. I need to use a neural network in an unconventional way: I have to compute the gradient of the model output with respect to the input, but I always get None.
My code is like this:

    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    model = torch.load('totalmodel.pth')
    model.eval()
    x = torch.tensor([1., 2., 3., 4.], device=device, requires_grad=True)
    y = model(x)
    y.backward()
    print(y)
    print(x.grad)

And the output is (screenshot not reproduced here):
It shows that the grad_fn of the output y is valid and that x.requires_grad is True, so why is x.grad None?

I worked this out. I was detaching the input when I converted the data type of the input parameters… I removed the detach and now it works…
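For anyone else hitting this: the conversion isn't shown in my snippet above, but it was roughly like the sketch below. A detach() in the middle of the dtype conversion creates a new tensor with no connection back to x, so backward() never reaches x (the sketch uses a single parameter w in place of my model, just for illustration):

    import torch

    w = torch.nn.Parameter(torch.ones(4))        # stands in for the model's weights

    x = torch.tensor([1., 2., 3., 4.], requires_grad=True)

    # What I had: a dtype conversion that went through .detach()
    x_bad = x.detach().double()                  # new tensor, cut off from x
    y_bad = (w.double() * x_bad).sum()
    y_bad.backward()
    print(y_bad.grad_fn)                         # valid: the graph still reaches w
    print(x.grad)                                # None: the graph never reaches x

    # Without the detach, the cast stays differentiable and x.grad is filled in
    x.grad = None
    x_ok = x.double()
    y_ok = (w.double() * x_ok).sum()
    y_ok.backward()
    print(x.grad)                                # tensor([1., 1., 1., 1.])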

Hi Tucik,

Out of interest, how did you "remove the detach" for the input?

I don't see the detach or the data type conversion in your code, but I am facing the same problem.

Thanks

If the gradients are unexpectedly None, you could try the following simple checks:

  1. tensor.requires_grad == True
  2. tensor.is_leaf == True, i.e. tensor.grad_fn is None;
    if grad_fn is not None, the tensor isn’t a leaf tensor and you might want to call retain_grad() on it (see the sketch after this list).
  3. autograd’s gradient computation is not disabled via the
    torch.no_grad() context manager or
    torch.autograd.set_grad_enabled(False)
  4. You are not running any non-differentiable operation and/or breaking the computation graph.
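Here is a minimal sketch illustrating checks 2 and 3 (the tensors are made up for illustration):

    import torch

    x = torch.tensor([1., 2., 3.], requires_grad=True)   # leaf tensor
    h = x * 2                                             # non-leaf: it has a grad_fn
    h.retain_grad()                                       # ask autograd to also keep h.grad

    y = (h ** 2).sum()
    y.backward()

    print(x.is_leaf, h.is_leaf)   # True False
    print(x.grad)                 # tensor([ 8., 16., 24.])
    print(h.grad)                 # tensor([ 4.,  8., 12.]) -- only populated because of retain_grad()

    # Check 3: anything computed under no_grad() is cut out of the graph
    with torch.no_grad():
        z = x * 3
    print(z.requires_grad)        # False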

Finally, you can use torchviz to visualize the computation graph of the tensor you are calculating gradients for, and to check that the tensor whose .grad you are accessing is a leaf of that graph.
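For example, something along these lines (assuming torchviz and graphviz are installed, and using the model and input from the original post):

    import torch
    from torchviz import make_dot

    model = torch.load('totalmodel.pth')
    model.eval()
    x = torch.tensor([1., 2., 3., 4.], requires_grad=True)
    y = model(x)

    # Passing x in `params` labels it in the rendered graph, so you can see
    # whether it actually appears as a leaf feeding into y
    dot = make_dot(y, params={'x': x, **dict(model.named_parameters())})
    dot.render('graph', format='png')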