How to save every visualization of conv2d activation layer?

Hello, I am trying to find a way to save a visualization of every conv2d activation layer in my model, to learn what each part of the model does. So far I have used the method from [Visualize feature map], but it requires me to specify which module to visualize. I was thinking of looping over the parameter names to solve this, but they contain unnecessary suffixes like `.0.weight` at the end. Does anyone have a better solution for my problem?

These are the names of all the parameters in my model:

module.encoder.conv1.0.weight
module.encoder.conv1.0.bias
module.encoder.conv1.1.weight
module.encoder.conv1.1.bias
module.encoder.conv2.0.weight
module.encoder.conv2.0.bias
module.encoder.conv2.1.weight
module.encoder.conv2.1.bias
module.encoder.conv3.0.weight
module.encoder.conv3.0.bias
module.encoder.conv3.1.weight
module.encoder.conv3.1.bias
module.bottleneck.block1.main.0.weight
module.bottleneck.block1.main.1.weight
module.bottleneck.block1.main.1.bias
module.bottleneck.block1.main.3.weight
module.bottleneck.block1.main.4.weight
module.bottleneck.block1.main.4.bias
module.bottleneck.block2.main.0.weight
module.bottleneck.block2.main.1.weight
module.bottleneck.block2.main.1.bias
module.bottleneck.block2.main.3.weight
module.bottleneck.block2.main.4.weight
module.bottleneck.block2.main.4.bias
module.bottleneck.block3.main.0.weight
module.bottleneck.block3.main.1.weight
module.bottleneck.block3.main.1.bias
module.bottleneck.block3.main.3.weight
module.bottleneck.block3.main.4.weight
module.bottleneck.block3.main.4.bias
module.bottleneck.block4.main.0.weight
module.bottleneck.block4.main.1.weight
module.bottleneck.block4.main.1.bias
module.bottleneck.block4.main.3.weight
module.bottleneck.block4.main.4.weight
module.bottleneck.block4.main.4.bias
module.bottleneck.block5.main.0.weight
module.bottleneck.block5.main.1.weight
module.bottleneck.block5.main.1.bias
module.bottleneck.block5.main.3.weight
module.bottleneck.block5.main.4.weight
module.bottleneck.block5.main.4.bias
module.bottleneck.block6.main.0.weight
module.bottleneck.block6.main.1.weight
module.bottleneck.block6.main.1.bias
module.bottleneck.block6.main.3.weight
module.bottleneck.block6.main.4.weight
module.bottleneck.block6.main.4.bias
module.decoder.deconv1.0.weight
module.decoder.deconv1.1.weight
module.decoder.deconv1.1.bias
module.decoder.deconv2.0.weight
module.decoder.deconv2.1.weight
module.decoder.deconv2.1.bias
module.decoder.deconv3.0.weight
module.decoder.deconv3.1.weight
module.decoder.deconv3.1.bias

This is my code so far; I still haven't found the correct way to loop over the conv layers:

def visualize_activation(model, dl, output_folder):
    activation = {}
    def get_activation(name):
        def hook(model, input, output):
            activation[name] = output.detach()
        return hook

    for bix, data in enumerate(dl):
        face = data
        if use_cuda:
            face = face.cuda()

        # one forward pass fills `activation` for every hooked layer
        predicted_face, _ = generator(face)

        for conv in conv_in_model:  # <- this is the part I haven't figured out
            act = activation[conv].squeeze()
            num_plot = 20
            row = 4
            col = int(np.ceil(num_plot / row))
            fig = plt.figure(1)
            for idx in range(num_plot):
                ax = fig.add_subplot(row, col, 1 + idx)
                ax.imshow(act[idx].cpu())
            plt.savefig(output_folder + str(bix) + '_' + conv + '.png',
                        bbox_inches='tight')
            plt.close(fig)

You could iterate over all modules, check whether the current module is an `nn.Conv2d`, and register the hook using its name.
Here is a dummy code snippet for resnet:

model = models.resnet50()
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(get_activation(name))

Note that you register the hook on the module in order to visualize its output activation.
The weight and bias parameters won't give you the activations.
Let me know if I misunderstood your use case.
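To make the whole flow concrete, here is a minimal, self-contained sketch that hooks every `nn.Conv2d` by name and captures all their outputs in one forward pass. It uses a small toy `nn.Sequential` model as a stand-in for your generator (the toy model and its layer sizes are just assumptions for the demo, not your architecture):

```python
import torch
import torch.nn as nn

activation = {}

def get_activation(name):
    def hook(module, input, output):
        activation[name] = output.detach()
    return hook

# Toy model standing in for your encoder/bottleneck/decoder generator.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
)

# Register one hook per conv layer; the module name becomes the dict key,
# so no parameter-name suffixes like ".0.weight" are involved.
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(get_activation(name))

# A single forward pass fills `activation` with every conv output.
x = torch.randn(1, 3, 32, 32)
model(x)

for name, act in activation.items():
    print(name, tuple(act.shape))
```

In your `visualize_activation`, the keys of `activation` would then serve directly as the list of conv names to loop over when plotting and saving each feature map.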