Hi there,
Suppose we are using a forward hook to inspect an intermediate layer:
class Hook():
    def __init__(self, m):
        self.hook = m.register_forward_hook(self.hook_func)
    def hook_func(self, m, i, o):
        self.stored = o.detach().clone()
    def __enter__(self, *args): return self
    def __exit__(self, *args):
        self.hook.remove()
for instance on layer4 of a ResNet. Once we capture the feature map (stored as hook.stored above), can we use it to compute the gradient of torch.norm(hook.stored) with respect to the model parameters?
Will the computation graph still be retained, given the detach() call in the hook? What is the correct way to accomplish this?
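For context, here is a minimal sketch of what I have in mind. It uses a small stand-in model instead of a ResNet, and it stores the raw output in the hook (no detach()), on the assumption that detach() would cut the stored tensor out of the graph:

```python
import torch
import torch.nn as nn

class Hook:
    def __init__(self, m):
        self.hook = m.register_forward_hook(self.hook_func)
    def hook_func(self, m, i, o):
        # Keep the live tensor: detach().clone() would break the graph.
        self.stored = o
    def __enter__(self, *args): return self
    def __exit__(self, *args):
        self.hook.remove()

# Small stand-in model (hooking layer4 of a ResNet would look the same).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

with Hook(model[0]) as h:
    x = torch.randn(2, 8)
    model(x)                      # forward pass populates h.stored
    loss = torch.norm(h.stored)   # scalar built from the hooked activation
    loss.backward()               # gradients flow back through the hook

# Parameters upstream of the hooked layer receive gradients...
print(model[0].weight.grad is not None)   # True
# ...but layers after the hook point get none from this loss.
print(model[2].weight.grad is None)       # True
```

Is this the right pattern, or is there a more idiomatic way to do it?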
Thanks!