Is there a simple way to access and save the grad_input and grad_output, as defined in the backward pass of a custom torch.autograd.Function? I would like to do this for all the layers, and I was wondering if there was something similar to register_backward_hook() that I could use.
In theory, that is what register_backward_hook() on nn.Module is supposed to give you.
Unfortunately, those hooks are broken at the moment; we hope to be able to fix them later this year.
What you can do right now is use register_forward_hook() on your nn.Module to access each layer's input and output Tensors, and then call register_hook() on those Tensors to capture their gradients during the backward pass.
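A minimal sketch of that workaround, assuming you just want to stash each layer's output gradient in a dict (the `saved_grads` name and the small example model are my own, not from the original post):

```python
import torch
import torch.nn as nn

# Dict to collect gradients, keyed by layer name (illustrative name choice).
saved_grads = {}

def make_forward_hook(name):
    def forward_hook(module, inputs, output):
        # Tensor.register_hook fires when the gradient w.r.t. `output`
        # is computed during backward, giving us the "grad_output".
        output.register_hook(lambda grad: saved_grads.__setitem__(name, grad))
    return forward_hook

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for name, layer in model.named_modules():
    if name:  # skip the top-level container itself
        layer.register_forward_hook(make_forward_hook(name))

x = torch.randn(3, 4)
model(x).sum().backward()
# saved_grads now maps each layer name to the gradient of that layer's output
```

The same pattern applies to the `inputs` tuple if you also need the gradients flowing into a layer; just attach `register_hook` to each input Tensor that requires grad.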
Am I using the hook incorrectly here, or does my hook mess with my computational graph? I don't have enough background to deeply understand the solution in the other post, or whether it even applies here. Thanks!