PyTorch 0.4.1: getting the gradient at a particular node

What is the standard way to get the gradient value at a particular node in PyTorch 0.4.1?
When I try to access the gradient of a particular tensor in the graph using the .grad attribute, I get None.

Is torch.autograd.grad the approach I should follow?

Also, is the None a result of the gradient being freed to save memory? (I also tried passing retain_graph=True to the backward call, but it didn't change anything.) Here is a past link to this kind of discussion, but it mainly talks about the Variable API.
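
For reference, this is roughly what I mean by calling torch.autograd.grad directly (the model and tensor names below are just placeholders, not my actual code):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)          # placeholder model
x = torch.randn(1, 10)
out = model(x)                    # intermediate (non-leaf) tensor
loss = out.mean()

# ask autograd explicitly for d(loss)/d(out); it returns a tuple of gradients
grad_out, = torch.autograd.grad(loss, out, retain_graph=True)
print(grad_out)                   # same shape as out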

You have to call backward before accessing .grad, otherwise it will be None:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
x = torch.randn(1, 10)
output = model(x)
output.mean().backward()    # populates .grad on the leaf parameters
print(model.weight.grad)

Yeah, I did call the backward function!

After the backward call, did you try to print the gradients directly, or did you perform some other operation first, e.g. optimizer.step()?
In the former case, could you post your model code? Maybe the computation graph is somehow detached at a certain point.

Hi, I forgot to post my code after replying earlier.

This is the relevant code:

         data, target = data.to(device), target.to(device)
         optimizer.zero_grad()
         output = model1(data)
         finalOutput = model2(data - factor*output)
         loss = F.nll_loss(finalOutput, target)
         loss.backward()
         print(output.grad)
         print(finalOutput.grad)
         optimizer.step()

None is being printed!!

Have a look at this thread.
register_hook is probably what you want.
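
For example, a minimal sketch along the lines of your snippet (model1, model2, factor, and the shapes are just placeholders here) could look like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

model1 = nn.Linear(10, 10)        # placeholder for your model1
model2 = nn.Linear(10, 2)         # placeholder for your model2
factor = 0.5
data = torch.randn(4, 10)
target = torch.randint(0, 2, (4,))

grads = {}                        # the hooks will store the gradients here

output = model1(data)
output.register_hook(lambda grad: grads.update(output=grad))

finalOutput = model2(data - factor * output)
finalOutput.register_hook(lambda grad: grads.update(finalOutput=grad))

# log_softmax added here just so nll_loss makes sense in a standalone snippet
loss = F.nll_loss(F.log_softmax(finalOutput, dim=1), target)
loss.backward()

print(grads['output'])            # gradient w.r.t. output
print(grads['finalOutput'])       # gradient w.r.t. finalOutput

The hooks fire during loss.backward(), so afterwards grads holds the gradients that are otherwise discarded for non-leaf tensors.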

Oh, I looked at that link but missed that answer while skimming :sweat_smile::sweat_smile::sweat_smile:
Thanks!!