Visualize feature map

The image you’ve posted is from Krizhevsky et al. and shows the learned filter kernels.
You are not seeing a feature map, but 96 kernels of size 3x11x11.
To get a similar image, you can use this code snippet:

import matplotlib.pyplot as plt
from torchvision.utils import make_grid

# Take the weights of the first conv layer and scale them to [0, 1]
kernels = model.extractor[0].weight.detach().clone()
kernels = kernels - kernels.min()
kernels = kernels / kernels.max()
img = make_grid(kernels)
plt.imshow(img.permute(1, 2, 0))

How can I get the feature maps instead of the kernels?

Would this approach work?
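A minimal sketch of the forward-hook idea; the model here is just a stand-in for your own CNN, and the layer name is illustrative:

```python
import torch
import torch.nn as nn

# Stand-in model; replace with your own network
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

activation = {}

def get_activation(name):
    # Forward hook that stores the layer's output under the given key
    def hook(module, inp, out):
        activation[name] = out.detach()
    return hook

model[0].register_forward_hook(get_activation('conv'))

x = torch.randn(1, 3, 32, 32)
_ = model(x)

# One 2D feature map per output channel
feat = activation['conv'].squeeze(0)
print(feat.shape)  # torch.Size([8, 32, 32])
```

Each slice `feat[i]` can then be passed to `plt.imshow` to visualize a single feature map.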


In the line:
img = data[idx, 0]
the array “data” is referenced, although it is not defined beforehand! The code still works, but how?

data and output are both defined in the training for loop.
I’m just reusing them for visualization.


@ptrblck how can we display the output of a layer at the original size of the image? For example, in UNet layer up2 (decoder section) the feature map output size is torch.Size([1, 128, 120, 160]). How can I display it at the original image size, which is [1, 240, 320]?

Actually, I posted the same question in a separate thread: https://discuss.pytorch.org/t/how-visualise-feature-map-in-original-size-of-input/39778
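One common option is to resize the feature map spatially with F.interpolate before plotting; a sketch using the shapes from the question (the tensor here is random):

```python
import torch
import torch.nn.functional as F

# Random stand-in for the up2 decoder output from the question
feat = torch.randn(1, 128, 120, 160)

# Resize spatially to the original 240x320 resolution
up = F.interpolate(feat, size=(240, 320), mode='bilinear', align_corners=False)
print(up.shape)  # torch.Size([1, 128, 240, 320])
```

A single channel, e.g. up[0, 0], can then be passed to plt.imshow, optionally overlaid on the input image with an alpha value.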

This is some super helpful code for me! One question I have is whether the activation captured by the hook is pre or post the application of the ReLU function? Thanks!

It would be pre-ReLU based on the registered hook.
However, since self.extractor uses inplace nn.ReLUs after the conv layers, the ReLU will still be applied to the stored activation, as it shares its memory with the tensor the inplace ReLU overwrites.
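This can be checked directly: the hook stores a view that shares storage with the conv output, so the inplace ReLU rewrites it; calling clone() inside the hook preserves the pre-ReLU values. A minimal sketch with a toy layer:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, 3)
relu = nn.ReLU(inplace=True)  # same inplace setup as in self.extractor

stored = {}
def hook(module, inp, out):
    stored['view'] = out.detach()           # shares storage with the conv output
    stored['clone'] = out.detach().clone()  # snapshot taken before the ReLU runs
conv.register_forward_hook(hook)

x = torch.randn(1, 1, 8, 8)
out = relu(conv(x))  # inplace ReLU overwrites the conv output in place

# stored['view'] was rewritten by the inplace ReLU (no negatives left),
# while stored['clone'] still holds the pre-ReLU activation.
```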

Hey @ptrblck:

What are the images in this link? https://towardsdatascience.com/how-to-visualize-convolutional-features-in-40-lines-of-code-70b7d87b0030

Are they activation maps or kernels?

Feature map visualization: https://youtu.be/RNnKtNrsrmg
Some images from the video:


Hi @ptrblck,

I’m learning PyTorch. I tried your code for the activation but I got an error.

TypeError                                 Traceback (most recent call last)
<ipython-input-71-94fc5c43ff92> in <module>()
     12 axarr[0].imshow(img.detach().numpy())
     13 # print(pred.detach().numpy().shape)
---> 14 axarr[1].imshow(pred.detach().numpy())
     15 # Visualize feature maps
     16 activation = {}

4 frames
/usr/local/lib/python3.6/dist-packages/matplotlib/image.py in set_data(self, A)
    688                 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
    689             raise TypeError("Invalid shape {} for image data"
--> 690                             .format(self._A.shape))
    691 
    692         if self._A.ndim == 3:

TypeError: Invalid shape () for image data

I’m working on cifar 10 dataset and shape of my training data
X_train_torch is (50000, 3, 32, 32)

Thanks!

Could you post the shape of pred?
If it’s more than a single prediction, you should index it first, since imshow can only visualize a single array.
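For example, if pred were a batch of feature maps, indexing down to one sample and one channel gives imshow the single 2D array it expects (the shapes here are made up):

```python
import torch

pred = torch.randn(4, 6, 32, 32)  # hypothetical batch: [N, C, H, W]

# plt.imshow expects a single 2D array (or HxWx3/4),
# so select one sample and one channel first
single_map = pred[0, 0]
print(single_map.shape)  # torch.Size([32, 32])
```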

Why is the shape of pred empty?

print(pred.shape)
>>>torch.Size([])

Is your model predicting image-like outputs?
It seems pred might be just a scalar tensor.

Yes. Here is my code.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 7, 1, 3)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 7, 1, 3)
        self.fc1 = nn.Linear(16 * 8 * 8, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 32)
        self.fc4 = nn.Linear(32, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 8 * 8)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return x


net = Net()

#training loop
for epoch in range(1,num_epoch+1):
    for i in range(0,len(X_train_torch),batch_size):
        X = X_train_torch[i:i+batch_size]
        y = y_train_torch[i:i+batch_size]
        optimizer.zero_grad()
        y_pred = net(X)
        l = loss(y_pred,y)
        
       
        l.backward()
        optimizer.step()
    print("Epoch %d final minibatch had loss %.4f" % (epoch, l.item()))

#testing loop
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for i in range(0,len(X_test_torch),batch_size):
        xt = X_test_torch[i:i+batch_size]
        yt = y_test_torch[i:i+batch_size]
        outputs = net(xt)
        _, predicted = torch.max(outputs.data, 1)
        c = (predicted == yt).squeeze()
        for i in range(4):
            label = yt[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))

# normalizing the output
def normalize_output(img):
    img = img - img.min()
    img = img / img.max()
    return img

# Plot some images
idx = torch.randint(0, outputs.size(0), ())  
pred = normalize_output(outputs[idx, 0])
img = X_train_torch[idx, 0]
print(pred.shape)
fig, axarr = plt.subplots(1, 2)
axarr[0].imshow(img.detach().numpy())
axarr[1].imshow(pred.detach().numpy())
# Visualize feature maps
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

net.conv1.register_forward_hook(get_activation('conv1'))
data = X_train_torch[0]
data.unsqueeze_(0)
output = net(data)

act = activation['conv1'].squeeze()

# fig, axarr = plt.subplots(act.size(0))
j = 0
for i in range(2):
  f,(ax1,ax2,ax3,ax4) = plt.subplots(1,4,sharex=True)
  ax1.imshow(act[j])
  ax2.imshow(act[j+1])
  ax3.imshow(act[j+2])
  ax4.imshow(act[j+3])

  j = j+4

In that case imshow won’t work, as it visualizes images.
What is your use case and what would you like to visualize or plot?

Got it. I’m trying to plot the activations.

Sorry to bother you; I wonder how you solved this problem:
print(pred.shape)

torch.Size([])

@ptrblck Yes, are there any solutions for this error? My img works, but my pred variable returns an empty shape and causes the invalid shape error. I am using the ImageNet dataset, and it does output images.

If your model outputs a scalar tensor, you won’t be able to plot it as an image using plt.imshow.
How would you like to plot this scalar value?
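If the goal is to inspect the class scores rather than an image, a bar chart is one option; a sketch with made-up logits standing in for the model output:

```python
import torch
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

logits = torch.randn(1, 10)            # stand-in for net(X) on one sample
probs = torch.softmax(logits, dim=1)[0]

plt.bar(range(10), probs.numpy())      # one bar per CIFAR-10 class
plt.xlabel('class index')
plt.ylabel('probability')
plt.savefig('class_probs.png')
```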

CC @max_jiang