What is the standard way to get the gradient value at a particular node in PyTorch 0.4.1?
When I try to access the gradient of a particular tensor in the graph via its .grad attribute, I get None.
Is the gradient set to None to free up memory? (I also tried passing retain_graph=True to the backward call, but it didn't change anything.) Here is a past link to this kind of discussion, but it mainly talks about the Variable API.
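For context, a minimal sketch of the behavior being asked about: in PyTorch 0.4+, .grad is only populated for leaf tensors (those created directly with requires_grad=True); for an intermediate tensor it stays None unless you call retain_grad() on it before backward. The tensor names here are made up for illustration.

```python
import torch

x = torch.ones(2, 2, requires_grad=True)  # leaf tensor
y = x * 2                                 # intermediate (non-leaf) tensor
y.retain_grad()                           # ask autograd to keep y's gradient
z = y.sum()
z.backward()

print(x.grad)  # leaf: gradient is populated (all 2s)
print(y.grad)  # without retain_grad() this would be None (all 1s)
```

Note that retain_graph=True in backward() only keeps the graph alive for a second backward pass; it does not make autograd store gradients on intermediate tensors, which is why it didn't help here.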
After the backward call, did you try to print the gradients directly, or did you perform some other operations first, e.g. optimizer.step()?
If the former, could you post your model code? Maybe the computation graph is somehow detached at a certain point.
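If storing the gradient on the tensor isn't needed, another way to inspect the gradient at a particular node is a backward hook via register_hook, which fires during the backward pass with the gradient flowing into that tensor. A small sketch (the grads dict and tensor names are illustrative):

```python
import torch

grads = {}

x = torch.ones(3, requires_grad=True)
y = x * 3  # intermediate tensor we want the gradient of

# the hook receives grad w.r.t. y during backward
y.register_hook(lambda g: grads.setdefault('y', g))

y.sum().backward()
print(grads['y'])  # gradient of sum w.r.t. y: all ones
```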