I’m writing an implementation of the GradCAM algorithm to visualize my neural net. To do this I’m using tensor hooks, like so:
...
def save_gradients(self, grad):
    self.gradients = grad
...
def forward(self, x):
    ...
    if i == target_layer:  # i is the index of the current layer
        x.register_hook(self.save_gradients)
        out = x
    ...
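For reference, here is a minimal self-contained version of the setup I’m describing. The two-layer `ModuleList` and the `target_layer` index are stand-ins for my real model, not the actual architecture:

```python
import torch
import torch.nn as nn

class GradCAMNet(nn.Module):
    """Minimal sketch: register a tensor hook on the output of one
    target layer so its gradients are captured during backward()."""

    def __init__(self, target_layer=0):
        super().__init__()
        # Placeholder layers mirroring the Conv/BN/LeakyReLU blocks above.
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(8),
                nn.LeakyReLU(0.1, inplace=True),
            ),
            nn.Sequential(
                nn.Conv2d(8, 8, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(8),
                nn.LeakyReLU(0.1, inplace=True),
            ),
        ])
        self.target_layer = target_layer
        self.gradients = None
        self.activations = None

    def save_gradients(self, grad):
        # A tensor hook receives the gradient of the tensor it was
        # registered on ("in" is a reserved word, so call it "grad").
        self.gradients = grad

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i == self.target_layer:
                x.register_hook(self.save_gradients)
                self.activations = x
        return x

net = GradCAMNet(target_layer=0)
out = net(torch.randn(1, 3, 16, 16))
out.sum().backward()
print(net.gradients.shape)  # gradient w.r.t. the target layer's output
```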
The problem is this only sometimes works, depending on what layer I’m trying to look at. For example:
Sequential(
(Conv2d): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(BatchNorm2d): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.1, inplace=True)
)
will work, but not:
Sequential(
(Conv2d): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(BatchNorm2d): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(activation): LeakyReLU(negative_slope=0.1, inplace=True)
)
Is my understanding of how hooks work wrong, which would explain why I can only look at certain layers, or is the bug elsewhere in my code?