Extract features from a layer of a submodule of a model

Is it possible to plot the outputs of the layers and the weights of the layers as well? I tried this, and without plotting the weights the error is ValueError: too many values to unpack (expected 2), although it does print the activation output.
Also, accessing the weights of the layer complains with AttributeError: 'BaseConv' object has no attribute 'weight'.

activations = {}

def get_activation(name):
    def hook(model, input, output):
        activations[name] = output.detach()
    return hook

model.init_conv.register_forward_hook(get_activation('init_conv'))
#weights = model.init_conv.weight.data.numpy()
outputs = model(t_image)

Could you point me to the line of code throwing the ValueError?
Is the print statement working?
Also, could you post your model architecture, so that we can check why the AttributeError is thrown?


Sorry for the delay in replying. The error is coming from plt.matshow(activations['init_conv']).
Yes, the print statement is working.
I am using your UNet implementation.

Could you print the shape of these activations?
If these activations have more than 3 channels, you would need to split them in order to visualize them as an image.


@ptrblck the shape of the activation is

torch.Size([1, 5, 1024, 1024])

It has more than 3 channels; how can I split them?

I also had a problem printing out the weights of the layer, which complains with AttributeError: 'BaseConv' object has no attribute 'weight'. Any ideas?

You could squeeze the batch dimension and use torch.split to get each channel separately:

imgs = activations['init_conv'].squeeze().split(1, 0)
for img in imgs:
    plt.matshow(img.squeeze().numpy())
BaseConv has two conv layers, so you can get the weights using e.g. model.init_conv.conv1.weight and model.init_conv.conv2.weight.
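Since the thread doesn't include the UNet code itself, here is a minimal stand-in for BaseConv (the two-conv structure and the names conv1/conv2 are taken from the reply above, the channel sizes are arbitrary) showing why .weight lives on the submodules rather than on the wrapper:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the UNet's BaseConv: a wrapper module
# holding two conv layers, as described above.
class BaseConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1)

    def forward(self, x):
        return self.conv2(torch.relu(self.conv1(x)))

block = BaseConv(1, 5)
# The wrapper itself has no .weight attribute; access the submodules:
w1 = block.conv1.weight
w2 = block.conv2.weight
print(w1.shape)  # torch.Size([5, 1, 3, 3])
```

Accessing block.weight directly would raise the AttributeError from the question, since only the inner nn.Conv2d modules own weight tensors.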


@ptrblck Thanks a lot. Everything is working now. I am wondering why we need to convert a torch tensor into a numpy array?

Good to hear it’s working now!
Well, it’s not really necessary in this case, as plt.imshow will also visualize the tensor, but I’m just used to working with numpy in matplotlib. Maybe that’s my way of interpreting the “Explicit is better than implicit” part of the Zen of Python. :wink:


I see :slight_smile: Could you please give me an example of when it’s necessary to do this? The only thing I found in the docs is this page, but I couldn’t find out when it’s necessary.

Generally I would recommend getting the numpy array if you would like to use numpy methods directly, or libs that depend heavily on numpy. I don’t really like mixing up both worlds implicitly.
E.g. while plt.imshow works fine, it seems that plt.plot won’t take a tensor input, since the ndim attribute is missing.
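A quick illustration of the explicit conversion (the tensor values are arbitrary; matplotlib itself isn't needed to see the point):

```python
import numpy as np
import torch

t = torch.linspace(0, 1, steps=5)
arr = t.numpy()  # explicit, zero-copy conversion to a numpy array

# The numpy array is what numpy-based APIs ultimately expect,
# e.g. plt.plot(arr) instead of plt.plot(t):
print(type(arr), arr.ndim)  # <class 'numpy.ndarray'> 1
```

Note that t.numpy() shares memory with the tensor, so in-place changes to one are visible in the other.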


@ptrblck I assume that after training for, say, 10 epochs, if I visualize the weights and the output of each layer, it will display the results of the last epoch. Is that correct? How can I get the results of epoch 1 or 2?

You could deepcopy the activation dict after these epochs and visualize them afterwards.
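A minimal sketch of that idea (the activation shapes and the dummy "forward pass" are placeholders for whatever the hooks produce in the real training loop):

```python
import copy
import torch

activations = {}      # filled by the forward hooks during each epoch
epoch_snapshots = {}  # epoch index -> deep copy of the activation dict

for epoch in range(3):
    # ... training / validation pass that populates `activations` ...
    activations['init_conv'] = torch.full((1, 5, 4, 4), float(epoch))  # dummy
    if epoch in (0, 1):  # keep only the epochs you want to inspect later
        epoch_snapshots[epoch] = copy.deepcopy(activations)

# The snapshot from epoch 0 is unaffected by later epochs:
print(epoch_snapshots[0]['init_conv'].unique())  # tensor([0.])
```

Without the deepcopy, every snapshot would hold a reference to the same dict, which the hooks keep overwriting.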


Hi, the code @ptrblck provided was really useful. Thanks for that.
I have two additional questions:

  1. I want to extract the features from a layer for all the testing data. Right now I am appending each batch output to a list as follows:

activation = {'init_conv': []}

def get_activation(name):
    def hook(model, input, output):
        activation[name].append(output.detach())
    return hook

Is there a clever way of getting all the features from all the batches into the same torch.Tensor?
For now my workaround is to create a single tensor from the list of tensors generated by the hook, but it is a slow process since I have to torch.cat each element of the list.

  2. Is there a way of activating or deactivating the hooks without using the remove() method? In my case I only need the hooks for the testing, not the training.
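Both points above can be sketched together: collect the per-batch outputs in a plain list, call torch.cat once at the very end, and gate the hook on a flag instead of removing it. The layer and the batch shapes here are stand-ins for the real model:

```python
import torch
import torch.nn as nn

features = []
capture = {'enabled': False}  # toggle without removing the hook

def hook(module, input, output):
    if capture['enabled']:
        features.append(output.detach())

layer = nn.Conv2d(1, 5, 3, padding=1)  # stand-in for the real layer
layer.register_forward_hook(hook)

capture['enabled'] = True  # e.g. only during testing
for _ in range(4):  # four test batches of size 2
    layer(torch.randn(2, 1, 8, 8))
capture['enabled'] = False  # hooks stay registered but do nothing

# One torch.cat at the end instead of concatenating per batch:
all_features = torch.cat(features, dim=0)
print(all_features.shape)  # torch.Size([8, 5, 8, 8])
```

Appending to a Python list is cheap; the expensive allocation happens only once, in the final torch.cat.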

If something is not clear enough, please let me know; English is not my native language.
Thanks in advance!

Hi! I have a similar doubt. I’ve been using the definition you posted, but I am not sure if I have understood how it works. I am working with an autoencoder class (the one I have attached) and I want to extract the outputs of the bottleneck (in my case, layer 3), so I am using model.layer3.register_forward_hook(get_activation('layer3')). Am I using it correctly? Thanks!

Yes, the usage looks correct and registering the hook to layer3 should work.
Do you see any errors or any other unexpected behavior?
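To make the pattern concrete, here is a minimal sketch with a hypothetical autoencoder; the layer names and sizes are assumptions standing in for the attached class:

```python
import torch
import torch.nn as nn

# Hypothetical autoencoder with a bottleneck named `layer3`,
# mirroring the structure described above (sizes are arbitrary).
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(16, 8)
        self.layer2 = nn.Linear(8, 4)
        self.layer3 = nn.Linear(4, 2)   # bottleneck
        self.layer4 = nn.Linear(2, 16)

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = torch.relu(self.layer2(x))
        x = self.layer3(x)
        return self.layer4(x)

activations = {}
def get_activation(name):
    def hook(model, input, output):
        activations[name] = output.detach()
    return hook

model = Autoencoder()
model.layer3.register_forward_hook(get_activation('layer3'))
model(torch.randn(3, 16))
print(activations['layer3'].shape)  # torch.Size([3, 2])
```

After one forward pass, activations['layer3'] holds the bottleneck output for the whole batch.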

Okay, thanks a lot! I have no errors; I just didn’t know if I was using it correctly.