Register_hook is not called for all registered tensors

Hi
VGG19 has 16 conv. layers. During the forward pass I register a hook (via register_hook) on the output (activation map) of each conv. layer and on the last max-pooling layer, and I record each activation map in a list right after registering the hook. As a result, after the forward pass I have activation maps for 17 layers. However, after calling backward on the model, only the gradients of the outputs of the last five layers are recorded; in other words, the hooked function is called only 5 times. Why isn't it called for all the outputs on which a hook was registered? How can I make sure the registered hook is called for every tensor I want?

Thank you in advance.

This is the function that is called when I pass an image through the model:

def save_gradient(self, grad):
    print('calling gradients')
    print(grad.shape)
    self.gradients.append(grad)

def __call__(self, x):
    outputs = []
    self.gradients = []
    for name, module in self.model._modules.items():
        if name not in self.target_layers:
            with torch.no_grad():
                x = module(x.to(self.device))
        else:
            x = module(x.to(self.device))
            x = x.requires_grad_(True)
            x.register_hook(self.save_gradient)
            outputs += [x]
    return outputs, x

Hi,

Hooks are only called for the part of the model that you backpropagate through.
And if you run part of the model in no-grad mode, no graph is recorded for that part, which breaks the link between its input and output. So the gradient won't flow backward past that part.
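For illustration, here is a minimal standalone sketch of that behaviour (not from your model, just a toy example): the hook registered before the no_grad block never fires because the graph is cut there, while the hook registered after it does.

import torch

x = torch.randn(3, requires_grad=True)

y = x * 2
y.register_hook(lambda g: print("hook on y:", g))  # never fires below

with torch.no_grad():
    z = y * 3            # no graph is recorded; the link to y is broken

z = z.requires_grad_(True)   # re-enable grad, as in your loop
w = z * 4
w.register_hook(lambda g: print("hook on w:", g))  # this one fires

w.sum().backward()  # the gradient stops at z, so only the hook on w is called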


Thank you @albanD for your response. I added torch.no_grad because after each convolutional layer there is a ReLU module, and passing the input through that layer throws the error below:

RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

How can I register a hook on the output of each convolutional layer without hitting this error when the outputs pass through the ReLU function?

Thank you very much for your time and consideration.

Well, that error means you're modifying in place something you should not.
If you don't use no_grad at all, then this means you're modifying an input in place, which is not allowed. You should add a clone before that op, or make the op not in-place, to solve the issue.
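As a rough sketch of both fixes, assuming the in-place op is an nn.ReLU(inplace=True) (as torchvision's VGG uses):

import torch
import torch.nn as nn

relu_inplace = nn.ReLU(inplace=True)
x = torch.randn(1, 3, requires_grad=True)   # a leaf that requires grad

# relu_inplace(x) would raise:
# RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

# Fix 1: clone first, so the in-place op runs on a non-leaf copy
out1 = relu_inplace(x.clone())

# Fix 2: make the op non-inplace
out2 = nn.ReLU(inplace=False)(x)

(out1 + out2).sum().backward()
print(x.grad.shape)   # gradients reach the leaf in both cases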

Thank you very much @albanD for your time and explanations.
I used torch.clone() before and after the ReLU module and the problem was solved.
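
For reference, a sketch of how the forward loop from the first post might look with these fixes applied (no no_grad, and a clone right after the hooked activation so a later in-place ReLU does not touch it); it assumes the same self.model, self.target_layers, self.save_gradient and self.device as above:

def __call__(self, x):
    outputs = []
    self.gradients = []
    x = x.to(self.device)
    for name, module in self.model._modules.items():
        x = module(x)                            # keep grad enabled everywhere
        if name in self.target_layers:
            x.register_hook(self.save_gradient)  # gradient of this activation map
            outputs.append(x)
            x = x.clone()                        # protect the hooked tensor from a later in-place ReLU
    return outputs, x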

Thank you.