What is the standard way to get the gradient value at a particular node in PyTorch 0.4.1?
When I try to access the gradient of an intermediate tensor in the graph using the .grad attribute, I get None.
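Here is a minimal example of what I mean (the tensors and shapes are just for illustration):

```python
import torch

x = torch.ones(2, 2, requires_grad=True)  # leaf tensor
y = x * 2                                 # intermediate (non-leaf) node
z = y.sum()
z.backward()

print(x.grad)  # the leaf's gradient is populated
print(y.grad)  # prints None for the intermediate node
```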
Is torch.autograd.grad the approach I should follow?
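If so, would something along these lines be the idiomatic way to do it (a sketch, using the same toy graph as above)?

```python
import torch

x = torch.ones(2, 2, requires_grad=True)
y = x * 2
z = y.sum()

# torch.autograd.grad computes the gradients of z with respect to the
# tensors passed as inputs, without populating their .grad attributes
grad_y, = torch.autograd.grad(z, y)
print(grad_y)  # tensor of ones, same shape as y
```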
Also, is the None returned because the gradient is freed to save memory? (I also tried passing retain_graph=True to the backward function, but it didn't change anything; see the sketch below.) Here is a past link to this kind of discussion, but it mainly talks about the Variable API.
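For reference, this is roughly what I tried with retain_graph=True (again on the toy graph above):

```python
import torch

x = torch.ones(2, 2, requires_grad=True)
y = x * 2
z = y.sum()

# retain_graph=True keeps the graph buffers around for another backward
# pass, but it does not seem to make .grad available on non-leaf tensors
z.backward(retain_graph=True)
print(y.grad)  # still None
```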