Is there a simple way to access and save the grad_input and grad_output as defined in the backward component of a custom torch.autograd.Function? I would like to do this for all the layers, and I was wondering if there was something similar to register_forward_hook() for the grad_input and grad_output.
In theory, that is what register_backward_hook() on nn.Module is supposed to give you.
Unfortunately, those hooks are broken right now, and we will hopefully be able to fix them later this year.
The way you can do this right now is to use register_forward_hook() on your nn.Module to access both the inputs and outputs, and then use register_hook() on those Tensors to access their gradients.
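A minimal sketch of that workaround, assuming a toy nn.Sequential model (the names saved_grads and save_grad_hooks are illustrative, not part of any PyTorch API). The forward hook attaches Tensor-level hooks to each leaf module's input and output, and the gradients are detached and cloned when saved so the stored copies don't keep the autograd graph alive:

```python
import torch
import torch.nn as nn

# Illustrative storage: module name -> {"grad_input": ..., "grad_output": ...}
saved_grads = {}

def save_grad_hooks(name):
    # Forward hook: register Tensor hooks on this module's input and output
    # so that their gradients are captured during backward().
    def forward_hook(module, inputs, output):
        inputs[0].register_hook(
            lambda grad: saved_grads.setdefault(name, {}).update(
                grad_input=grad.detach().clone()
            )
        )
        output.register_hook(
            lambda grad: saved_grads.setdefault(name, {}).update(
                grad_output=grad.detach().clone()
            )
        )
    return forward_hook

model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
for name, module in model.named_modules():
    if len(list(module.children())) == 0:  # leaf modules only
        module.register_forward_hook(save_grad_hooks(name))

x = torch.randn(2, 4, requires_grad=True)
model(x).sum().backward()
```

After the backward pass, saved_grads holds one entry per leaf module, e.g. saved_grads["0"]["grad_input"] has the same shape as the first Linear layer's input.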
Here is the solution I’ve come up with for appending the grad_input after every loss.backward():
grad_input_list = []

def forward_hook_grad_saver(self, input, output):
    # save the gradient of this module's input on every loss.backward()
    input[0].register_hook(lambda grad: grad_input_list.append(grad))
I think I’m having the same issue as posted here:
Memory leak when using forward hook and backward hook simultaneously, or something similar, since my GPU runs out of memory.
Am I using the hook incorrectly here, or does my hook interfere with my computational graph? I don’t have enough background or knowledge to deeply understand the solution in the other post, or whether it even applies here. Thanks!