Accessing and saving grad_input and grad_output

Hi,

Here is the solution I’ve come up with for appending grad_input to a list after every loss.backward():

grad_input_list = []

def input_gradient_appender(grad):
    # called during backward(); saves the gradient flowing into the layer's input
    grad_input_list.append(grad)

def forward_hook_grad_saver(module, input, output):
    # on every forward pass, register a tensor hook on the layer's input
    input[0].register_hook(input_gradient_appender)

model.linear2.register_forward_hook(forward_hook_grad_saver)
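
For context, here is a self-contained toy version of how I’m calling it; Net, the MSE loss, and the random tensors are just stand-ins for my real model and data:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(10, 20)
        self.linear2 = nn.Linear(20, 5)

    def forward(self, x):
        return self.linear2(torch.relu(self.linear1(x)))

model = Net()
model.linear2.register_forward_hook(forward_hook_grad_saver)

criterion = nn.MSELoss()
for step in range(3):
    x = torch.randn(8, 10)
    target = torch.randn(8, 5)
    loss = criterion(model(x), target)
    loss.backward()

# one gradient tensor (shape [8, 20], the grad w.r.t. linear2's input)
# is appended per backward() call
print(len(grad_input_list))  # 3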

I think I’m having the same issue as the one posted here:
Memory leak when using forward hook and backward hook simultaneously (or something similar), since my GPU runs out of memory.
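
One thing I’ve been wondering about (I’m not sure whether it actually addresses the leak) is whether I should detach the gradient and copy it to the CPU before appending, so the list doesn’t keep the live GPU tensors around:

def input_gradient_appender(grad):
    # store a detached CPU copy rather than holding on to the GPU tensor
    grad_input_list.append(grad.detach().cpu())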

Am I using the hook incorrectly here, or does my hook mess with my computational graph? I don’t have enough background or knowledge to deeply understand the solution in the other post, or whether it even applies here. Thanks!
