Interpreting output of backward hook

Hi

I have a question about the output of a backward hook on the Mixed_6e.branch1x1.conv layer of PyTorch's pre-trained inception_v3:
(Mixed_6e): InceptionC(
  (branch1x1): BasicConv2d(
    (conv): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  ...
)
I registered forward and backward hooks on this layer. As a result:
The shape of module_out in the forward hook is [5, 192, 17, 17], which I think is the activation maps of 5 images passing through this layer. However, the output of the backward hook is not at all clear to me, and I would really appreciate it if anyone could explain it.
The output for module_in of the backward hook:
len of module_in 3
shape of inside module_in torch.Size([5, 768, 17, 17])
shape of inside module_in torch.Size([192, 768, 1, 1])
None found for Gradient
The output for module_out of the hook:
shape of inside module_out torch.Size([5, 192, 17, 17])
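
For reference, this is roughly how I registered the hooks (a minimal sketch of my setup; the random batch of 5 images is just for illustration):

    import torch
    from torchvision import models

    model = models.inception_v3(pretrained=True)
    model.eval()
    layer = model.Mixed_6e.branch1x1.conv

    def forward_hook(module, module_in, module_out):
        print("shape of module_out", module_out.shape)   # torch.Size([5, 192, 17, 17])

    def backward_hook(module, module_in, module_out):
        # here module_in / module_out are tuples of gradients, not activations
        print("len of module_in", len(module_in))
        for grad in module_in:
            print("None found for Gradient" if grad is None else grad.shape)
        print("shape of module_out", module_out[0].shape)

    layer.register_forward_hook(forward_hook)
    layer.register_backward_hook(backward_hook)          # deprecated, see the reply below

    x = torch.randn(5, 3, 299, 299)                      # batch of 5 images
    model(x).sum().backward()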

In case you were using register_backward_hook, note that its behavior was broken for some modules, which is why it is deprecated. Use register_full_backward_hook instead. The docs explain the expected inputs and outputs.
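
The three entries you printed look like the gradients w.r.t. the input, the weight, and the (missing, since bias=False) bias, which is exactly the kind of inconsistency the old hook had. With the full hook, grad_input only contains the gradients w.r.t. the forward inputs, not the parameter gradients. A minimal sketch on a standalone conv layer matching yours:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(768, 192, kernel_size=1, bias=False)

    def hook(module, grad_input, grad_output):
        # grad_input: gradients w.r.t. the forward inputs (a 1-tuple here)
        print([None if g is None else g.shape for g in grad_input])  # [torch.Size([5, 768, 17, 17])]
        # grad_output: gradients w.r.t. the forward outputs (a 1-tuple here)
        print([g.shape for g in grad_output])                        # [torch.Size([5, 192, 17, 17])]

    conv.register_full_backward_hook(hook)

    x = torch.randn(5, 768, 17, 17, requires_grad=True)
    conv(x).sum().backward()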

Thank you @ptrblck for your response. I have read the link that you sent and I have a follow-up question. What is the difference between the grad_out obtained by registering register_full_backward_hook on a module such as Conv2d and the one obtained via A.register_hook() (where A is the output of a module, i.e. an activation)? Are they equal but just coming from different places?
My goal is to compute the gradient with respect to the outputs of the filters in a specific layer. Which of these two functions, register_full_backward_hook or register_hook on a tensor, would you suggest?

Thank you in advance for your attention to my problem.

register_full_backward_hook will give you the grad inputs and outputs, i.e. the gradients that are passed into this module and forwarded to the next one during the backward pass.
register_hook is used on parameters and will return the gradient w.r.t. this parameter.
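
To make the comparison concrete, a quick sketch (standalone toy layer, not your model) showing that the grad_output seen by the module hook and the gradient seen by register_hook on the module's output tensor carry the same values:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(768, 192, kernel_size=1, bias=False)
    grads = {}

    def module_hook(module, grad_input, grad_output):
        grads["module"] = grad_output[0].detach().clone()

    conv.register_full_backward_hook(module_hook)

    x = torch.randn(5, 768, 17, 17, requires_grad=True)
    A = conv(x)                                    # the activation
    A.register_hook(lambda g: grads.update(tensor=g.detach().clone()))
    A.sum().backward()

    print(torch.allclose(grads["module"], grads["tensor"]))  # True

For your use case (gradients w.r.t. the filter outputs of one specific layer) both would therefore work; register_hook on the stored activation is often the more direct approach.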

Thank you again @ptrblck for your explanations. Sorry, do the inputs and outputs that you mentioned mean the input activations to this module and the output activations of this module? And does the parameter that you mentioned mean the weights and bias of this module? Am I right? I ask because, based on the code snippet provided by @albanD in the discussion How do hooks work?, the register_hook function is registered on the output activations of the module.

Thank you so much.

No, I meant the gradient inputs and outputs created during the backward pass. The figures in this post should give you a better idea of the forward activations and the gradients during the backward pass.

Yes, I meant the parameters of the module, such as weight and bias, but register_hook can be used on tensors in general, not only on parameters.
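
A small sketch of that last point (toy layer again):

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(768, 192, kernel_size=1, bias=False)

    # on a parameter: gradient w.r.t. the weight
    conv.weight.register_hook(lambda g: print("grad w.r.t. weight:", g.shape))  # torch.Size([192, 768, 1, 1])

    # on a plain tensor: gradient w.r.t. the input
    x = torch.randn(5, 768, 17, 17, requires_grad=True)
    x.register_hook(lambda g: print("grad w.r.t. input:", g.shape))             # torch.Size([5, 768, 17, 17])

    conv(x).sum().backward()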