Accessing and saving grad_input and grad_output

Hi All,

Is there a simple way to access and save the grad_input and grad_output as defined in the backward method of a custom torch.autograd.Function? I would like to do this for all the layers, and I was wondering whether there is something similar to

net.conv11.weight.grad
net.conv21.bias.grad

for the grad_input and grad_output.

Thanks!

Hi,

In theory, that is what register_backward_hook() on nn.Module is supposed to give you.
Unfortunately, it is broken right now, and we will hopefully be able to fix it later this year.

The way you can do this right now is to use register_forward_hook() on your nn.Module to access both the inputs and the outputs, and then use register_hook() on those Tensors to access their gradients.
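For example, a minimal sketch of that pattern; the toy model, the grads dict and the helper names are just placeholders, not part of any API:

import torch
import torch.nn as nn

# Toy model used only for illustration.
net = nn.Sequential(nn.Conv2d(1, 4, 3), nn.ReLU(), nn.Conv2d(4, 1, 3))

grads = {}  # label -> captured gradient Tensor

def save_grad(key):
    # Tensor hook: called during backward with the gradient w.r.t. that Tensor.
    def hook(grad):
        grads[key] = grad
    return hook

def make_forward_hook(name):
    # Module forward hook: gives access to the input and output Tensors,
    # on which we register Tensor hooks to capture their gradients.
    def forward_hook(module, inputs, output):
        inputs[0].register_hook(save_grad(name + ".grad_input"))
        output.register_hook(save_grad(name + ".grad_output"))
    return forward_hook

for name, module in net.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(make_forward_hook(name))

x = torch.randn(1, 1, 8, 8, requires_grad=True)
net(x).sum().backward()
print(sorted(grads))  # ['0.grad_input', '0.grad_output', '2.grad_input', '2.grad_output']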

Hi,

Here is the solution I’ve come up with for saving the grad_input after every loss.backward():

grad_input_list = []

def input_gradient_appender(grad):
    # Tensor hook: called during backward with the gradient w.r.t. the hooked Tensor.
    grad_input_list.append(grad)

def forward_hook_grad_saver(module, inputs, output):
    # Module forward hook: registers a Tensor hook on the module's first input
    # at every forward pass, so its gradient gets appended on the next backward.
    inputs[0].register_hook(input_gradient_appender)

model.linear2.register_forward_hook(forward_hook_grad_saver)
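For context, here is a self-contained sketch of how I exercise it; the model, layer sizes, and input shape below are just stand-ins for my actual setup (the register_forward_hook line is repeated so the sketch runs on its own):

import torch
import torch.nn as nn

# Hypothetical model exposing a linear2 submodule.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(8, 16)
        self.linear2 = nn.Linear(16, 4)

    def forward(self, x):
        return self.linear2(torch.relu(self.linear1(x)))

model = Net()
model.linear2.register_forward_hook(forward_hook_grad_saver)

x = torch.randn(32, 8)
model(x).sum().backward()          # any scalar loss works
print(len(grad_input_list))        # 1 -- one captured gradient per backward pass
print(grad_input_list[0].shape)    # torch.Size([32, 16]), gradient w.r.t. linear2's input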

I think I’m having the same issue as posted in
Memory leak when using forward hook and backward hook simultaneously, or something similar, since my GPU runs out of memory.

Am I using the hook incorrectly here, or does my hook mess with my computational graph? I don’t have enough background or knowledge to deeply understand the solution in the other post or whether it even applies here. Thanks!
