Understanding deep networks: visualizing weights

@herleeyandi I know it is quite late but you might find it useful. https://github.com/utkuozbulak/pytorch-cnn-visualizations

You can play around with gradients and guided gradients to visualize layers other than the first one.
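As a minimal sketch of the gradient-based idea mentioned above: a vanilla saliency map is just the gradient of the top class score with respect to the input pixels. The tiny model below is a stand-in (any trained classifier works the same way); the names and sizes are illustrative, not from the linked repository.

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier (hypothetical); a trained VGG/ResNet works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Saliency: gradient of the top class score with respect to the input image.
image = torch.rand(1, 3, 32, 32, requires_grad=True)
scores = model(image)
scores[0, scores.argmax()].backward()

# Per-pixel importance: max absolute gradient across the channel dimension.
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 32, 32])
```

Guided backpropagation follows the same pattern but additionally masks negative gradients at each ReLU during the backward pass.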

9 Likes

Wow, thank you so much @uozbulak for your work. By the way, can we also visualize every convolution result like in here? Is it just a matter of forwarding an image, then taking the tensor output of every convolution and plotting it? I am still confused.

Yeah, you can visualize every filter of every layer. If it helps, I can include that in the repository in the upcoming days as well.
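To make the "forward an image, then take the tensor result of every convolution" idea concrete, here is a hedged sketch using forward hooks (the layer names and sizes below are illustrative, not from the linked repository); each captured tensor can then be plotted channel by channel:

```python
import torch
import torch.nn as nn

# Stand-in network; the same hook pattern works on any torchvision model.
model = nn.Sequential(
    nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
    nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
)

feature_maps = {}

def save_output(name):
    # Forward hooks receive (module, inputs, output); store a detached copy.
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# Attach a hook to every conv layer so a single forward pass captures them all.
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        module.register_forward_hook(save_output(name))

_ = model(torch.rand(1, 3, 16, 16))
for name, fmap in feature_maps.items():
    print(name, tuple(fmap.shape))
```

Each entry in `feature_maps` is a `(batch, channels, H, W)` tensor; plotting one channel at a time (e.g. with `plt.imshow(fmap[0, i])`) gives the per-filter activation images.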

2 Likes

Of course it would help a lot, especially for beginners like me. Thank you for your work.

Hello herleeyandi,
Here is my implementation using make_grid to visualize filters:

pytorch_visualization.ipynb

3 Likes

@johnny5550822 I want to know if you have done the "deconvolution" visualization in PyTorch, as in the paper "Visualizing and Understanding Convolutional Networks",
or can you give any suggestions?
Thanks!

Is there an update on the repo?

I think @uozbulak's repo (see messages above) should be sufficient for almost any visualization task.

Hi,
Newbie here, can anyone explain more about what we are looking at? I understand these are the kernels from VGG, but how is this a saliency map? Would I get different filters on my own dataset? I cannot see how this helps in understanding what the network is learning.

Thanks!

It's great ~ thanks for your help!

@uozbulak greetings! Have you tried learned-mask visualization as in the "Interpretable Explanations of Black Boxes by Meaningful Perturbation" paper? I found it's already implemented in PyTorch: https://github.com/jacobgil/pytorch-explain-black-box. I think it'd be better to have all the visualization tools in one place, with the same ecosystem and code style.

Thanks for your simple but robust visualization code. Remember that a conv weight tensor is in N×C×H×W order (out_channels × in_channels × height × width), so you need to move the channel dimension to the last axis to visualize the weights correctly. As such, the second-to-last line should be

tensor = layer1.weight.data.permute(0, 2, 3, 1).numpy()

This fix should also apply to other networks, like the ResNets in torchvision.
(Hate to revive an old topic but I thought it worth fixing this for future reference).
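For future reference, here is a self-contained sketch of that fix; the layer below is an untrained stand-in for something like ResNet's `conv1` (64 filters, 7×7 kernels), so the names and sizes are illustrative:

```python
import torch
import torch.nn as nn

# A conv layer's weight is (out_channels, in_channels, kH, kW); for per-filter
# RGB plotting, the input-channel dimension must move to the last axis.
layer1 = nn.Conv2d(3, 64, kernel_size=7)  # stand-in for e.g. resnet18's conv1
tensor = layer1.weight.data.permute(0, 2, 3, 1).numpy()
print(tensor.shape)  # (64, 7, 7, 3): 64 filters, each a 7x7 RGB image
```

Each `tensor[i]` is then directly usable with `plt.imshow` (after normalizing to a displayable range, since raw weights can be negative).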

1 Like

how can I do it for 3D convolutions?
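A Conv3d weight has shape (out_channels, in_channels, D, H, W), so there is no single 2-D image for it. One common workaround (a sketch, not from this thread; sizes are illustrative) is to unroll the depth dimension into separate 2-D slices that make_grid/imshow can handle:

```python
import torch
import torch.nn as nn

# 3-D conv weights: (out_channels, in_channels, D, H, W).
conv3d = nn.Conv3d(1, 8, kernel_size=(3, 5, 5))
w = conv3d.weight.data                       # shape (8, 1, 3, 5, 5)

# Unroll depth: each of the 8 filters yields 3 separate 5x5 kernels,
# giving a stack of single-channel 2-D images for make_grid/imshow.
slices = w.permute(0, 2, 1, 3, 4).reshape(-1, 1, 5, 5)
print(slices.shape)  # torch.Size([24, 1, 5, 5]): 8 filters x 3 depth slices
```

Alternatives include averaging over the depth dimension or plotting each depth slice in its own row of the grid.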

works for me, thanks

print(model.conv1.weight.data)

works. It seems the attribute should be weight, not weights.

Here's one short and sweet way of getting some kind of visualization, although I haven't checked it at all for accuracy. I just grabbed the weight data from my chosen layer, made a grid with torchvision.utils.make_grid, made it a little bigger, then imshowed the transposed version of it. I figure, if you just want something quick, why over-engineer it? From here you can add to it and customize it all you want.

import matplotlib.pyplot as plt
from torchvision import utils

w = model.conv1.weight.data
grid = utils.make_grid(w, nrow=10, normalize=True, scale_each=True)

plt.figure(figsize=(10, 10))
plt.imshow(grid.permute(1, 2, 0))

or you can write the following for the last line:

plt.imshow(grid.numpy()[0])

or

plt.imshow(grid[0,:])

They all produce an image, but I'm not sure which is best. Can anyone verify and/or improve on this? It seems simple and straightforward.

There is a typo

should be m.weight.data