KeyError with get_activation

Hey,

I've already gotten a lot of helpful things from this forum, so many thanks!

I found the following function from ptrblck to visualize a feature map:

activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

This worked perfectly for my ResNet50, and now I wanted to try it on the discriminator of a GAN.
The model is made up like this:

class Discriminator128(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator128, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nc) x 128 x 128
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 64 x 64
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 32 x 32
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 16 x 16
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 8 x 8
            nn.Conv2d(ndf * 8, ndf * 16, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 16),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*16) x 4 x 4
            nn.Conv2d(ndf * 16, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)
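As a sanity check of the layer indices and spatial sizes, the same `main` stack can be rebuilt compactly and the shape printed after each `Conv2d` (a sketch; `nc = 3` and `ndf = 8` are just small assumed values):

```python
import torch
import torch.nn as nn

nc, ndf = 3, 8  # small assumed values, just for the shape check

# rebuild main as above: each strided conv halves the spatial size
layers = [nn.Conv2d(nc, ndf, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True)]
for i in range(4):
    layers += [nn.Conv2d(ndf * 2**i, ndf * 2**(i + 1), 4, 2, 1, bias=False),
               nn.BatchNorm2d(ndf * 2**(i + 1)),
               nn.LeakyReLU(0.2, inplace=True)]
layers += [nn.Conv2d(ndf * 16, 1, 4, 1, 0, bias=False), nn.Sigmoid()]
main = nn.Sequential(*layers)

x = torch.randn(1, nc, 128, 128)
for idx, layer in enumerate(main):
    x = layer(x)
    if isinstance(layer, nn.Conv2d):
        print(f"main[{idx}]: {tuple(x.shape)}")
# main[0]: (1, 8, 64, 64)
# main[2]: (1, 16, 32, 32)
# main[5]: (1, 32, 16, 16)
# main[8]: (1, 64, 8, 8)
# main[11]: (1, 128, 4, 4)
# main[14]: (1, 1, 1, 1)
```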

But when I try to get the activation in the following way:

act = activation['main[0]'].squeeze()

I get:

KeyError: 'main[14]'

Anybody who can help me out?

Thanks!

Could you post the code showing how you’ve registered the hooks?
Also, it seems you are passing 'main[14]' as the key.

Like this:

netD.main[14].register_forward_hook(get_activation('main[14]'))
output = netD(img)  # img has shape torch.Size([1, 3, 128, 128])
print("main[14]") 
act = activation['main[0]'].squeeze()
print(act.type)
num_plot = 4
fig, axarr = plt.subplots(min(act.size(0), num_plot))
for idx in range(min(act.size(0), num_plot)):
    print(idx)
    axarr[idx].imshow(act[idx],cmap="gray")

The code seems to work:

nc = 3
ndf = 3
model = Discriminator128(0)
model.main[14].register_forward_hook(get_activation('main[14]'))
output = model(torch.randn(1, 3, 128, 128))
act = activation['main[14]']

Note that act will have the shape [1, 1, 1, 1], so imshow won’t make much sense (you would be plotting a single pixel).

Thanks for the reply!

I no longer get the KeyError, but now I was wondering if you could help me visualize the feature map. Like you said, imshow doesn’t work here.

The code now looks similar to something you wrote on a different topic: Visualize feature map

So what would you think would be the best way to visualize the feature map for a given image?

The linked code should work.
However, the current layer (main[14]) only has one output pixel, so while the code will “visualize” this value, there might not be much information in this “image”.
I would suggest selecting other layers whose activation maps are larger in spatial size, so that you can visualize them accordingly. :wink:
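For example, with nc = ndf = 3 as above, hooking main[4] (the LeakyReLU after the first BatchNorm) gives a [6, 32, 32] activation you can actually plot. A sketch (the Agg backend and the output filename are assumptions for non-interactive use):

```python
import torch
import torch.nn as nn
import matplotlib
matplotlib.use("Agg")  # assumed: headless backend so this runs without a display
import matplotlib.pyplot as plt

nc = ndf = 3  # values from the snippet above

# the discriminator's feature extractor, as defined earlier
main = nn.Sequential(
    nn.Conv2d(nc, ndf, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 8, ndf * 16, 4, 2, 1, bias=False), nn.BatchNorm2d(ndf * 16), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 16, 1, 4, 1, 0, bias=False), nn.Sigmoid()
)

activation = {}
def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

main[4].register_forward_hook(get_activation('main[4]'))
_ = main(torch.randn(1, nc, 128, 128))

act = activation['main[4]'].squeeze()  # [1, ndf*2, 32, 32] -> [6, 32, 32]
num_plot = 4
fig, axarr = plt.subplots(min(act.size(0), num_plot))
for idx in range(min(act.size(0), num_plot)):
    axarr[idx].imshow(act[idx], cmap="gray")
fig.savefig("feature_maps.png")  # assumed filename
```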