Is there a way to trace backward()'s calculations?

I'm getting a value that is not what I expect based on my understanding of backpropagation. I have a simple two-layer neural network. Using register_backward_hook, I am saving the grad_output value, which, as I understand it, is the gradient flowing into the layer from the layer after it. This value is as expected at the output layer, but for the first layer it is not. Is there any way, possibly in pdb or using retain_variables=True, to trace the exact calculations autograd performs? A minimal sketch of the setup is below.
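
For reference, something along these lines (the layer sizes and hook bookkeeping here are placeholders, not my actual code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two-layer network standing in for the real model.
model = nn.Sequential(
    nn.Linear(4, 3),  # first layer
    nn.Linear(3, 2),  # output layer
)

saved = {}

def make_hook(name):
    # grad_output is the gradient arriving at this module from the layer above.
    def hook(module, grad_input, grad_output):
        saved[name] = grad_output[0].detach().clone()
    return hook

for name, layer in model.named_children():
    # register_full_backward_hook is the newer equivalent in recent PyTorch.
    layer.register_backward_hook(make_hook(name))

x = torch.randn(1, 4)
model(x).sum().backward()

for name, grad in saved.items():
    print(name, grad)
```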

Backward hooks, or calling retain_grad() on the intermediate tensors so their gradients are kept after backward(), are the way to go for this.
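
A minimal sketch of the retain_grad() approach (layer sizes and variable names are placeholders): call retain_grad() on each intermediate activation, optionally attach a Tensor.register_hook to print gradients as autograd computes them, then compare the retained .grad values against a hand-computed backprop pass.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

fc1 = nn.Linear(4, 3)
fc2 = nn.Linear(3, 2)

x = torch.randn(1, 4)

h = fc1(x)
h.retain_grad()  # keep the gradient of this non-leaf tensor after backward()
h.register_hook(lambda g: print("grad reaching fc1's output:", g))

out = fc2(h)
out.retain_grad()
out.register_hook(lambda g: print("grad reaching fc2's output:", g))

loss = out.sum()
loss.backward()

# Compare against a manual backprop step:
print(out.grad)                    # all ones, since loss = out.sum()
print(h.grad)                      # gradient entering the first layer
print(out.grad @ fc2.weight)       # should match h.grad
```

If the retained h.grad does not match what you compute by hand, the mismatch is already present at that tensor, which narrows down where your expectation and autograd diverge.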