I want to extract the feature maps at different layers of a VGG model.
The VGG model is implemented as a nested sequence of blocks. Each elementary block is a sequence of layers. For example:
block3 = Sequential(
    Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
    ReLU(inplace=True),
    Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
    ReLU(inplace=True),
    Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
    ReLU(inplace=True),
)
To extract the activations in block3, for instance, I used the approach detailed here, with a get_activation(name) function:
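For reference, here is a minimal sketch of the get_activation helper I am assuming (based on the linked approach): a closure that stores the hooked module's output in a module-level dict under the given name.

```python
import torch

# Assumed helper: stores each hooked module's output under `name`.
activation = {}

def get_activation(name):
    def hook(module, input, output):
        # Detach so the stored tensor does not keep the graph alive.
        activation[name] = output.detach()
    return hook
```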
for name, module in model._modules.items():
    if name == 'encoder':
        for nname, submodule in module._modules.items():
            if nname == 'block3':
                for layer_name, layer in submodule._modules.items():
                    print(nname, layer)
                    model.encoder.block3.register_forward_hook(get_activation(layer_name))

output = model(x_input)
print(activation.keys())  # dict_keys(['1', '3', '4', '0', '2', '5'])
So far, so good. I can visualize the activations. However, they all look exactly the same: when I compute the difference between the output activation of block3.0 and that of block3.4, I get 0:
import numpy as np

block3_0 = activation['0'].numpy()[0]
block3_4 = activation['4'].numpy()[0]
print(np.sum(block3_0 - block3_4))  # -> 0
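As an aside, I am aware that np.sum of a difference can be 0 by cancellation even for unequal arrays, so a stricter check may be warranted (names here are just illustrative):

```python
import numpy as np

# np.sum(a - b) == 0 does not prove a == b: positive and
# negative differences can cancel.
a = np.array([1.0, -1.0])
b = np.array([-1.0, 1.0])
print(np.sum(a - b))         # 0.0, yet a and b differ
print(np.array_equal(a, b))  # False
print(np.abs(a - b).max())   # 2.0 -- max absolute difference
```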
Did I set up the forward hook properly, so that it actually "attached" to different layers?
Thank you.