The behavior of forward hooks

Hi there,

Suppose we are using a forward hook to inspect an intermediate layer:

class Hook():
    def __init__(self, m):
        # register the hook on the given module
        self.hook = m.register_forward_hook(self.hook_func)
    def hook_func(self, m, i, o):
        # store a detached copy of the module's output
        self.stored = o.detach().clone()
    def __enter__(self, *args): return self
    def __exit__(self, *args):
        self.hook.remove()

for instance, layer4 in a ResNet. Once we have the stored feature map (hook.stored above), can we use it to compute the gradient of torch.norm(hook.stored) with respect to the model parameters?

Will the computation graph be retained? What is the correct way to accomplish this task?

Thanks!

Hi,

If you use .detach() in the hook, then no, the graph will not be kept.
But if you don’t detach, then yes: the stored Tensor will still require gradients, just like the Tensors inside the forward pass, and you can backpropagate through it however you want :slight_smile:
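
For reference, here is a minimal sketch of what that could look like once the .detach() is removed (assuming a torchvision resnet18 and its layer4; the input shape and the final print are just for illustration):

import torch
from torchvision.models import resnet18

class Hook():
    def __init__(self, m):
        self.hook = m.register_forward_hook(self.hook_func)
    def hook_func(self, m, i, o):
        self.stored = o  # keep the graph: no .detach()
    def __enter__(self, *args): return self
    def __exit__(self, *args):
        self.hook.remove()

model = resnet18()
x = torch.randn(1, 3, 224, 224)

with Hook(model.layer4) as hook:
    model(x)
    loss = torch.norm(hook.stored)
    loss.backward()  # populates .grad for every parameter that feeds layer4

print(model.conv1.weight.grad.shape)

Note that parameters coming after layer4 (e.g. the fc layer) do not influence hook.stored, so their .grad stays None; if you also need gradients from a regular loss on the model output, you can sum the two losses before calling backward().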

Thank you. I will remove the .detach() call.