How to get input & output tensors of a module in the backward pass

Hi everyone,

I’m working on a project that requires the input and output tensors of intermediate layers for further analysis. Although these tensors are computed in the forward pass, I’d like to access them during backpropagation for convenience of analysis.
The register_backward_hook function looked useful, but it only returns the grad_input and grad_output tensors, which are not the tensors I want. Can anyone give me some advice? Any information would be appreciated.

May I know why you need it in the backpropagation process?
You can simply save the forward call results for intermediate layers using register_forward_hook and then use them inside the hook registered with register_backward_hook.
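A minimal sketch of that idea (not from this thread): a small helper class caches a module's forward input and output in attributes, and the backward hook then has both the cached activations and the gradients. It uses register_full_backward_hook, which is the non-deprecated variant of register_backward_hook on recent PyTorch versions.

```python
import torch
import torch.nn as nn

class Probe:
    """Cache a module's forward tensors for use in its backward hook."""
    def __init__(self, module):
        self.inp = None
        self.out = None
        module.register_forward_hook(self.save_forward)
        module.register_full_backward_hook(self.use_in_backward)

    def save_forward(self, module, input, output):
        # input is a tuple of tensors; output is a single tensor here
        self.inp = input[0].detach()
        self.out = output.detach()

    def use_in_backward(self, module, grad_input, grad_output):
        # The cached forward tensors are available alongside the gradients.
        print(f"forward output mean: {self.out.mean():.4f}, "
              f"grad_output mean: {grad_output[0].mean():.4f}")

net = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
probe = Probe(net[1])  # probe the ReLU

x = torch.randn(2, 4)
net(x).sum().backward()
```

When the backward hook fires, `probe.inp` and `probe.out` hold the activations saved during the forward pass, so the analysis can combine them with grad_input/grad_output.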


Hi mailcorahul,

Thanks for your advice!
I want to implement an algorithm that computes the effective path of a neural network, proposed in a recent paper. I don’t strictly need to do it during backpropagation, but since the algorithm processes the network from the output layer back to the input layer, implementing it in the backward pass seemed natural.
However, I still have some questions about that solution. First, where should I store the intermediate layers’ results, since the hook doesn’t take ctx as a parameter? Also, a module is sometimes used several times in a model; will that cause problems?

You can store the forward results in an attribute. For example, saving resnet18’s output after the first 7x7 conv + ReLU:

import torch.nn as nn
from torchvision import models

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = models.resnet18(pretrained=True)
        # register a forward hook on the first ReLU (after the 7x7 conv + BN)
        self.remove_handle = self.net._modules['relu'].register_forward_hook(self.forward_hook)
        self.forward_output = None

    def forward_hook(self, module, input, output):
        # save the module's output in an attribute and use it later during backprop
        self.forward_output = output

    def forward(self, x):
        return self.net(x)

Afraid I won’t be able to answer your question on “will multiple calls to a module cause problems”, since I don’t know much about the paper (on the effective path of a neural network) you’re talking about. But regarding a module being called several times: the forward hook will be invoked on every call.
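Since the hook fires on every call, a single attribute would be overwritten when a module runs more than once per forward pass. A hypothetical workaround (not from this thread) is to append each result to a list instead:

```python
import torch
import torch.nn as nn

# One entry per invocation, in call order.
outputs = []

def forward_hook(module, input, output):
    outputs.append(output.detach())

shared = nn.Linear(3, 3)
shared.register_forward_hook(forward_hook)

x = torch.randn(2, 3)
y = shared(shared(x))  # the same module is called twice

print(len(outputs))  # one saved tensor per call
```

Clear the list at the start of each forward pass if you only want results from the latest one.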

Thanks very much for your help! I’ll have a try.

Hi, I made a library to get intermediate results with minimal modifications to your model (none for simple use cases); if a module is called more than once, all the results are saved. Check it out: Pytorch-Intermediate-Layer-Getter