Inspecting gradients with backward hooks

I am trying to use PyTorch to inspect the gradient values at each layer of a simple model. I am doing this with a backward hook on each layer. The hook currently prints the same value for the input and output gradients, so I am clearly misunderstanding something. Why are the input and output gradient values identical inside my hook? My hook function is as follows:

def grad_hook(module, grad_input, grad_output):
    # Backward hook: grad_input and grad_output are tuples of gradients
    # w.r.t. the module's inputs and outputs, respectively.
    print("")
    print(module)
    print("-" * 10 + ' Gradient Values ' + "-" * 10)
    print("")
    print('Incoming Grad value: {}'.format(grad_input[0].data))
    print("")
    print('Upstream Grad value: {}'.format(grad_output[0].data))
    print("-" * 37)
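
For context, the hook is registered on every layer before calling backward(), roughly like this (the model and data here are a simplified sketch, not my exact code):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 1))

# Attach the backward hook to each layer so it fires during backward()
for layer in model.children():
    layer.register_backward_hook(grad_hook)

out = model(torch.randn(1, 3))
out.sum().backward()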

And an example output for a linear layer:

Linear(in_features=3, out_features=5, bias=True)
---------- Gradient Values ----------

Incoming Grad value: tensor([-1.3997, -2.1604,  0.8113, -1.0236,  0.3797])

Upstream Grad value: tensor([[-1.3997, -2.1604,  0.8113, -1.0236,  0.3797]])
--------------------------------------

Hi,

As noted in the docs, backward hooks on nn.Modules are currently not working as expected.
You can use Tensor hooks to get the values you want instead: call register_hook() on the Tensors you want to inspect, either directly in the forward pass code, or from an nn.Module forward hook, which gives you access to the input/output tensors of a given nn.Module.
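
For example, the forward-hook version could look something like this (the small model and helper names here are just for illustration):

import torch
import torch.nn as nn

def make_printer(name):
    def print_grad(grad):
        print('{} output grad: {}'.format(name, grad))
    return print_grad

def forward_hook(module, inputs, output):
    # The output tensor is part of the autograd graph, so a Tensor hook
    # attached here will fire when its gradient is computed in backward().
    output.register_hook(make_printer(module.__class__.__name__))

model = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 1))
for layer in model.children():
    layer.register_forward_hook(forward_hook)

model(torch.randn(1, 3)).sum().backward()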

Thank you for your reply. I saw the bug report in this GitHub issue (https://github.com/pytorch/pytorch/issues/16276). I'll use register_hook on individual tensors for now.
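
For reference, a minimal version of that per-tensor approach might look like this (the toy model is just an illustration):

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 5)
        self.fc2 = nn.Linear(5, 1)

    def forward(self, x):
        h = self.fc1(x)
        # Tensor hook: called with the gradient w.r.t. h during backward()
        h.register_hook(lambda grad: print('fc1 output grad:', grad))
        return self.fc2(torch.relu(h))

model = MyModel()
model(torch.randn(1, 3)).sum().backward()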