Backward hook sometimes not called


I want to use forward and backward hooks on a single layer of the VQA BAN model. The forward hook works as intended; the backward hook, however, is simply not called for some layers when I experiment with different layers.
VQA BAN consists of a BAN network with a resnet152 and Detectron backend.
With some layers of BAN it works, with others it does not. For resnet152 no layer works: the backward hook is simply never called…
I tried the same with Pythia VQA, which consists of the Pythia network with a resnet152 and Detectron backend. There every backward hook works as intended, without a problem.

Any idea why this could be the case?

Here is the code I use:
(“name” is the name of the layer I want to hook)

    def forward_hook(key):
        def forward_hook_(module, input, output):
            # Save featuremaps
            output = detach_output(output, 0)
            if not isinstance(output, (dict, BoxList)):
                self.fmap_pool[key] = output

        return forward_hook_

    def backward_hook(key):
        def backward_hook_(module, grad_in, grad_out):
            # Save the gradients corresponding to the feature maps
            self.grad_pool[key] = grad_out[0].detach()

        return backward_hook_

    # If no candidate layers are specified, the hooks are registered on all layers.
    for name, module in self.model.named_modules():
        if self.candidate_layers is None or name in self.candidate_layers:
            module.register_forward_hook(forward_hook(name))
            module.register_backward_hook(backward_hook(name))

I would guess that this most likely happens because of interactions between views and in-place operations.
In general, I would discourage using module backward hooks. You can see in the docs a warning that their current behavior is not always as expected.

In your case, it would be more reliable to use `t.register_hook()` directly on the Tensors of interest. Note that you can do this from inside your forward hook, since the forward hook receives the output tensor.
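As a minimal sketch of that approach (the toy model and the `fmap_pool`/`grad_pool` names here are illustrative, mirroring the snippet above, not the actual BAN code):

```python
import torch
import torch.nn as nn

fmap_pool = {}
grad_pool = {}

def forward_hook(key):
    def forward_hook_(module, input, output):
        # Save the feature map as before.
        fmap_pool[key] = output.detach()
        # Register a Tensor hook on the output itself. It fires whenever a
        # gradient is computed for this exact tensor, so it does not suffer
        # from the module-backward-hook issues with views/in-place ops.
        output.register_hook(
            lambda grad: grad_pool.__setitem__(key, grad.detach())
        )
    return forward_hook_

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))
model[1].register_forward_hook(forward_hook("relu"))

out = model(torch.randn(2, 4))
out.sum().backward()
# grad_pool["relu"] now holds the gradient w.r.t. the ReLU output
```

The tensor hook replaces `backward_hook` entirely, so only the forward hook needs to be registered on the module.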