Display feature maps of MNAS

Hi everyone,

I need help, please. I have two images, and I take the absolute difference between them to get a single image. I use a pretrained Mobile Neural Architecture Search (MNAS) network to extract features only. I need to display the feature map of this image from a middle conv layer of MNASNet and the feature map of the final conv layer before the classifier.

I tried this snippet from @ptrblck, but I have a problem.

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import models
import torchvision.transforms as transforms
import torchvision.datasets as datasets

import matplotlib.pyplot as plt
from PIL import Image
import numpy as np


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = models.mnasnet1_0(pretrained=True)
        # Replace the classifier with an identity so the model returns features only
        self.conv1.classifier = nn.Identity()
        self.convNet = nn.Sequential(self.conv1)
        
    def forward(self, x):
        x = self.convNet(x)
        return x

img1 = './dataset/11/frame001.jpg'
img2 = './dataset/11/frame004.jpg'
img1 = Image.open(img1)
img2 = Image.open(img2)
img1 = transforms.ToTensor()(img1)
img2 = transforms.ToTensor()(img2)


model = MyModel()


# Visualize feature maps
activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

# Register a forward hook on the wrapped MNASNet
model.conv1.register_forward_hook(get_activation('conv1'))

img = torch.abs(img1 - img2)  # absolute difference of the two frames
img.unsqueeze_(0)             # add a batch dimension
output = model(img)


act = activation['conv1'].squeeze()
fig, axarr = plt.subplots(act.size(0))
plt.imshow(act)

I get this error:


  File "C:\Users\Windows10\anaconda3\envs\Heyam\lib\site-packages\matplotlib\axes\_axes.py", line 5523, in imshow
    im.set_data(X)

  File "C:\Users\Windows10\anaconda3\envs\Heyam\lib\site-packages\matplotlib\image.py", line 709, in set_data
    raise TypeError("Invalid shape {} for image data"

TypeError: Invalid shape (1280,) for image data

Your activation (act) seems to have the wrong shape, so plt.imshow cannot visualize it.
Make sure you are trying to plot a tensor in the shape [height, width, 3] or [height, width], and pass it to plt.imshow as a numpy array via tensor.numpy().
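For reference, here is a minimal sketch (with a made-up random activation) of pulling a single 2D channel out of a 4D activation so that plt.imshow can display it:

import torch
import matplotlib.pyplot as plt

# Hypothetical 4D activation in the shape [batch, channels, height, width]
act = torch.randn(1, 1280, 7, 7)

# Select the first channel of the first sample -> a 2D [height, width] tensor
plt.imshow(act[0, 0].numpy())
plt.show()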

Thanks a lot.
The act shape is:

torch.Size([1280])

Can you help me change the shape of the act tensor?

The activation seems to be the final output of the mnasnet1_0. Since you replaced the classifier with nn.Identity, it is the flattened 1280-dimensional pooled feature vector, not a 2D feature map.
If you want to visualize it, you could use e.g. plt.plot instead of plt.imshow, since the activation is not an image but just a flattened tensor.
Alternatively, you could reshape the activation to one of the aforementioned image shapes.
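A minimal sketch of both options, using a random stand-in for the flattened 1280-element activation:

import torch
import matplotlib.pyplot as plt

act = torch.randn(1280)  # stand-in for the flattened activation

# Option 1: plot the values as a curve
plt.plot(act.numpy())
plt.show()

# Option 2: reshape into a 2D grid and show it as an image
# (1280 = 32 * 40; the grid layout itself carries no spatial meaning)
plt.imshow(act.view(32, 40).numpy())
plt.show()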


I'm tired of this error. Can you help me, please, @ptrblck?
My code:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models
import torchvision.transforms as transforms

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.mod = models.mnasnet1_0(pretrained=True)
        self.convNet = nn.Sequential(*list(self.mod.children()))

    def forward(self, x):
        seqLen = x.size(0) - 1
        for t in range(0, seqLen):
            x1 = x[t] - x[t+1]
            x2 = self.convNet(x1)

img1 = './dataset/11/frame001.jpg'
img2 = './dataset/11/frame004.jpg'
img1 = Image.open(img1)
img2 = Image.open(img2)

trans = transforms.Compose([transforms.RandomResizedCrop(224),
                            transforms.ToTensor(),
                            transforms.Normalize([0.485, 0.456, 0.406],
                                                 [0.229, 0.224, 0.225])])


inpSeq = []

inpSeq.append(trans(img1.convert('RGB')))
inpSeq.append(trans(img2.convert('RGB')))
inpSeq = torch.stack(inpSeq, 0)  # [2, 3, 224, 224]
inpSeq = inpSeq.unsqueeze(0)     # [1, 2, 3, 224, 224]
model = MyModel()


activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

# Register the hook on the wrapped nn.Sequential
model.convNet.register_forward_hook(get_activation('conv1'))

output = model(inpSeq)

act = activation['conv1'].squeeze()
plt.imshow(act[0])

The error is here:

Traceback (most recent call last):

  File "C:\Users\Windows10\Desktop\Code1\exp.py", line 67, in <module>
    act = activation['conv1'].squeeze()

KeyError: 'conv1'

If I’m using a seqLen of 1 and a batch size of 2, the internal loop will not be executed and thus no hooks are called:

x = torch.randn(1, 2, 3, 224, 224)
model(x)

since your loop would be:

for t in range(0, 0):

and would thus never execute self.convNet.
Note that your current forward method also doesn’t return anything, so you might want to change that as well.
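For example, a minimal sketch of one way to return the per-step outputs (using self.mod directly, to avoid the shape mismatch discussed below; stacking the outputs is just one option):

def forward(self, x):
    seqLen = x.size(0) - 1
    outs = []
    for t in range(0, seqLen):
        x1 = x[t] - x[t+1]
        outs.append(self.mod(x1))
    return torch.stack(outs, 0)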

However, if I’m using a seqLen of 2, I get a shape mismatch error:

x = torch.randn(2, 2, 3, 224, 224)
output = model(x)
> RuntimeError: mat1 and mat2 shapes cannot be multiplied (17920x7 and 1280x1000)

so I guess you might be facing the first issue.

The shape mismatch error is most likely raised because you are rewrapping the child modules in an nn.Sequential container and are thus dropping the functional calls from the original forward.
I would not recommend wrapping arbitrary models in nn.Sequential unless you are sure that all modules are executed purely sequentially, without any conditions, functional calls, loops, etc.
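To illustrate: mnasnet1_0's forward applies a functional mean pooling between the feature layers and the classifier, which is lost when the model is rebuilt from its children. A sketch of hooking submodules directly instead of rewrapping, assuming the torchvision MNASNet layout with its layers attribute (the indices are just examples, not a recommendation for specific layers):

import torch
from torchvision import models

model = models.mnasnet1_0(pretrained=True)

activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

# Hook a middle block and the last block of the feature extractor
model.layers[8].register_forward_hook(get_activation('middle'))
model.layers[-1].register_forward_hook(get_activation('last'))

x = torch.randn(1, 3, 224, 224)
out = model(x)
print(activation['middle'].shape)  # 4D feature map, e.g. [1, C, H, W]
print(activation['last'].shape)    # [1, 1280, 7, 7] for a 224x224 input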

If I avoid the shape mismatch error and use:

def forward(self, x):
    seqLen = x.size(0) - 1
    for t in range(0, seqLen):
        x1 = x[t] - x[t+1]
        x2 = self.mod(x1)

with x = torch.randn(2, 2, 3, 224, 224), the forward hook is properly called and activation will be populated.
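Putting the pieces together for the original question, here is a minimal end-to-end sketch that captures the feature map of the difference image from a middle conv block and from the final conv block, again assuming the torchvision layers attribute and example indices:

import torch
import matplotlib.pyplot as plt
from torchvision import models

model = models.mnasnet1_0(pretrained=True).eval()

activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

model.layers[8].register_forward_hook(get_activation('middle'))
model.layers[-1].register_forward_hook(get_activation('final'))

# Difference of two (already transformed) image tensors of shape [3, 224, 224]
img1, img2 = torch.randn(3, 224, 224), torch.randn(3, 224, 224)
diff = torch.abs(img1 - img2).unsqueeze(0)  # add the batch dimension

with torch.no_grad():
    model(diff)

# Display the first channel of each captured 4D feature map
for i, name in enumerate(['middle', 'final']):
    plt.subplot(1, 2, i + 1)
    plt.imshow(activation[name][0, 0].numpy())
    plt.title(name)
plt.show()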
