Visualize feature map

Hello,
Has anyone done this (e.g. via hooks) with C++ and LibTorch?

Thanks

Hope this snippet helps you plot 16 outputs in a subplot grid:

import matplotlib.pyplot as plt

# act: tensor containing the 16 feature maps, e.g. of shape [16, H, W]
fig, axarr = plt.subplots(4, 4)
k = 0
for idx in range(act.size(0) // 4):
    for idy in range(act.size(0) // 4):
        axarr[idx, idy].imshow(act[k].detach().cpu())
        k += 1

For example, if we consider @ptrblck’s code snippet and change the conv2 layer to output 16 feature maps for the visualization, the output could look like:

[image: feature map visualization]
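
For reference, the referenced snippet registers a forward hook to capture the conv activations. Here is a minimal sketch (the model definition below is a stand-in assumption, not the original code):

import torch
import torch.nn as nn

# stand-in model: conv2 outputs 16 feature maps, as in the example above
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, 3, padding=1)
        self.conv2 = nn.Conv2d(8, 16, 3, padding=1)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        return torch.relu(self.conv2(x))

activation = {}

def get_activation(name):
    # store the (detached) output of the hooked module
    def hook(module, inp, out):
        activation[name] = out.detach()
    return hook

model = Net()
model.conv2.register_forward_hook(get_activation('conv2'))
_ = model(torch.randn(1, 1, 28, 28))  # dummy input

act = activation['conv2'].squeeze(0)  # [16, 28, 28], usable in the subplot code above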

Hi, I have one doubt in addition to this.
While calculating the loss, i.e. nn.CrossEntropyLoss(output, labels), we use the output of the last layer, i.e. the final output of the network.

But I want to use the feature maps of each convolutional layer in the loss calculation as well, in addition to the above.
Can you please shed some light on how this can be done?

It depends on which target you would like to use for the intermediate activations.
Since they are conv outputs, you won’t be able to use e.g. nn.CrossEntropyLoss directly, as these outputs don’t represent logits for the classification use case and have a different shape.
However, you could follow an approach similar to Inception models, which fork auxiliary outputs from intermediate activations and use these aux. output layers to calculate the loss.
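
A minimal sketch of such an auxiliary head (the layer sizes and names are assumptions, and real Inception aux classifiers are more elaborate): pool the intermediate conv activation and map it to class logits so that nn.CrossEntropyLoss applies:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxHead(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # [N, C, H, W] -> [N, C, 1, 1]
        self.fc = nn.Linear(in_channels, num_classes)  # map channels to logits

    def forward(self, x):
        return self.fc(self.pool(x).flatten(1))        # [N, num_classes]

aux_head = AuxHead(in_channels=64, num_classes=10)
inter_act = torch.randn(8, 64, 14, 14)                 # dummy intermediate activation
aux_loss = F.cross_entropy(aux_head(inter_act), torch.randint(0, 10, (8,)))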

Actually, I want to use a discriminative loss in addition to the cross-entropy loss. The discriminative loss would be based on the feature maps of each convolutional layer, while the cross-entropy loss remains as usual. But so far I haven’t found any source explaining how to do it!

You could return the intermediate activations in the forward as seen here:

def forward(self, x):
    x1 = self.layer1(x)
    x2 = self.layer2(x1)
    out = self.layer3(x2)
    return out, x1, x2

and then calculate the losses separately:

out, x1, x2 = model(input)
loss = criterion(out, target) + criterion1(x1, target1) + criterion2(x2, target2)
loss.backward()

or alternatively you could use forward hooks to store the intermediate activations and calculate the losses with them. This post gives you an example of how to use forward hooks.
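
A rough sketch of the hook-based alternative, reusing the names from the snippet above (the linked post is not reproduced here, and the layer names are assumptions):

acts = {}
# a hook receives (module, input, output); returning None leaves the output unchanged
h1 = model.layer1.register_forward_hook(lambda m, i, o: acts.update(layer1=o))
h2 = model.layer2.register_forward_hook(lambda m, i, o: acts.update(layer2=o))

out = model(input)
loss = criterion(out, target) \
     + criterion1(acts['layer1'], target1) \
     + criterion2(acts['layer2'], target2)
loss.backward()

h1.remove()  # remove the hooks once they are no longer needed
h2.remove()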

Thank you for the reply @ptrblck.

I actually created a new thread because the discussion deviates slightly and I have a few more doubts. Can you please take a look at it?

If I do something like

Loss = my_Loss(x1, original_images)

then for every layer the size of the tensor will be different and it will raise an error:
after one convolutional layer, x1 has a size of [batch_size, num_features, h, w] = [50, 12, 28, 28],
while original_images has a size of [50, 3, 32, 32].
So, how can the loss be calculated then?

And one more doubt: can we do something to make all the features (num_features=12 in this case) as distinct as possible from one another?

You could use additional conv/pooling/interpolation layers to make the intermediate activations match the size of the target (or input) tensor.
Alternatively, you could change the model architecture such that the spatial size won’t be changed, but again it depends on your use case.
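
As a small illustration of the second suggestion (the channel counts here are just made up): with stride 1 and matching padding, a conv keeps the spatial size unchanged:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 12, kernel_size=3, stride=1, padding=1)  # "same" padding for k=3
x = torch.randn(50, 3, 32, 32)
print(conv(x).shape)  # torch.Size([50, 12, 32, 32]) -> spatial size preserved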

I don’t know which approach would create activations meeting this requirement.

As long as we provide some kernel size (without padding), there will be a reduction in the size:

size = ((W - K + 2P) / S) + 1
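
For example, with W=32, K=5, P=0 and S=1 this gives ((32 - 5 + 0) / 1) + 1 = 28, which matches the 28x28 activations mentioned above (assuming a 5x5 kernel was used).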

I don’t really get your point.
Can you please give dummy examples of both scenarios you suggested?

Yes, you cannot simply calculate a loss between arbitrarily sized tensors, so you would need to come up with a way to calculate the loss between intermediate activations (with different sizes) and a target.

To get the same spatial size you could e.g. use pooling/convs/interpolation, while you would still have to make sure to create the same number of channels.
This can again be done via convs (or via reductions), but it depends on your use case.

Here is a small example:

import torch
import torch.nn as nn
import torch.nn.functional as F

act = torch.randn(1, 64, 24, 24)
target = torch.randn(1, 3, 28, 28)

# make sure the spatial sizes match
act_interp = F.interpolate(act, (28, 28))

# create the same number of channels via a 1x1 conv
conv = nn.Conv2d(64, 3, 1)
act_interp_channels = conv(act_interp)

print(act_interp_channels.shape)  # now has the same shape as the target
loss = F.mse_loss(act_interp_channels, target)

Thanks for the reply.
This is something new to me, I will try it.

So, can we say that pooling and interpolation correspond to downscaling and upscaling?

And regarding the other approach: I have an output tensor (still the intermediate activations after a convolutional layer) of size [batch_size=100, channels=32, 1, 1] and targets (which are class labels now) of size [batch_size=100, 1], given num_classes = 10.

So, for the loss calculation, i.e. loss = lossFunc(output, labels), is there any loss function which can reduce the dimensionality from 32 to 10 and then calculate the loss?
Or do I have to use a conv/linear layer for that first and then calculate the loss? Because in that case it adds more weights, which I want to avoid.

Yes, you could call it that. Note that you could both down- and upscale using an interpolation method.

I assume the shape should be [100, 10]?
If so, you could either use a trainable layer (as you’ve already described) or try to reduce the channels using e.g. mean, sum, etc.
However, a reduction might be a bit tricky in this use case, as the target size of 10 doesn’t fit evenly into the channel size of 32.
Generally, any operation which maps 32 values to 10 would be allowed.
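
A small sketch of both options (the parameter-free adaptive-pooling route is my own assumption, not stated above; it averages the 32 values into 10 unevenly sized bins):

import torch
import torch.nn as nn
import torch.nn.functional as F

act = torch.randn(100, 32, 1, 1)  # intermediate activation

# parameter-free reduction: treat the 32 channels as a 1D signal and pool to 10
reduced = F.adaptive_avg_pool1d(act.flatten(1).unsqueeze(1), 10).squeeze(1)
print(reduced.shape)  # torch.Size([100, 10])

# trainable alternative (adds weights, as discussed)
fc = nn.Linear(32, 10)
logits = fc(act.flatten(1))  # [100, 10]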

Ok, that looks super cool :smiley: Can you give some tips, code, or steps on how you did it? :)

Hi, thank you so much for this code! I am a beginner and am learning CNNs by looking at examples. It really helps. Can you guide me on how I can visualize the last layer of my model?

If you would like to visualize the parameters of the last layer, you can directly access them, e.g. via model.last_layer.weight, and visualize them using e.g. matplotlib.
The activations would most likely be the return value and thus the model output, which can be visualized in a similar fashion. If you want to grab intermediate activations, you could use forward hooks and plot them after the forward pass has been executed.
Let me know if that answers the question.
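
A minimal sketch of the first approach (model.last_layer is a placeholder for your actual attribute name, and a 2-D weight, e.g. from a linear layer, is assumed):

import matplotlib.pyplot as plt

w = model.last_layer.weight.detach().cpu().numpy()  # e.g. shape [out_features, in_features]
plt.imshow(w, cmap='viridis', aspect='auto')
plt.colorbar()
plt.show()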

Hi, thanks! I have implemented it as per your comments. Can you tell me how we can interpret the results?

As per your comment I got the weights of fully connected layer 1, and the size is torch.Size([500, 800]),
but I am unable to plot it, as I get this error: TypeError: Invalid shape (800,) for image data. I think the problem is that the fully connected layer is flattened, so it is a 1-D array, and therefore I am getting this error? What is the solution for this?

You should be able to visualize a numpy array of shape [500, 800] using matplotlib in the same way as done in your previous post. I guess you might be indexing the array, so you could instead try to plot it directly.
I don’t know what kind of information you are looking for, but the color map would indicate the value ranges and you could try to interpret them as desired.
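For example (assuming the weights came from the layer named fc1 in the model below), plotting the full 2-D matrix instead of an indexed 1-D row avoids the shape error:

import matplotlib.pyplot as plt

w = model.fc1.weight.detach().cpu().numpy()  # shape (500, 800)
plt.imshow(w, aspect='auto')  # plot the full matrix, not an indexed row like w[0]
plt.colorbar()
plt.show()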

(layer1): Sequential(
  (0): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (1): ReLU()
  (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(layer2): Sequential(
  (0): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
  (1): ReLU()
  (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(fc1): Linear(in_features=800, out_features=500, bias=True)
(dropout1): Dropout(p=0.5, inplace=False)
(fc2): Linear(in_features=500, out_features=10, bias=True)
This is my CNN model, and whenever I try to fetch the weights for layer1 I get this error: ModuleAttributeError: ‘Sequential’ object has no attribute ‘weight’.

Can you please help me with this error?
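
(For context, a likely explanation as a sketch: nn.Sequential is a container without a weight attribute of its own; the parameters live on its submodules, which can be reached by index:)

w = model.layer1[0].weight  # the Conv2d inside the Sequential holds the weights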