Visualize feature map

  1. The model I created reconstructs the images purely through its architecture. As you can see, I created a “bottleneck” in the model, i.e. the activations become spatially smaller, and after it I used transposed convolution layers to increase the spatial size again.
    The last layer outputs the same shape as the input. While this simple model works on the MNIST dataset, it might be too simple for other, more complicated use cases. A minimal sketch of such an architecture is included at the end of this post.

  2. You would need to add the decoding part, i.e. create the output images from your latent vector. Depending on your encoder’s output shape, you might need to reshape it first. If you need help implementing it, could you post your encoder architecture? A minimal decoder sketch is also included at the end of this post.

  3. Yes, you are currently visualizing the activations, i.e. the output of intermediate layers. In case you want to visualize the kernels directly, you could use the following code:

import matplotlib.pyplot as plt

# Visualize the conv filters (learned kernels) of the first conv layer
kernels = model.conv1.weight.detach().cpu()
fig, axarr = plt.subplots(kernels.size(0))
for idx in range(kernels.size(0)):
    # squeeze the single input channel so imshow receives a 2D array
    axarr[idx].imshow(kernels[idx].squeeze())
plt.show()
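The snippet above shows the learned kernels. If you instead want to grab the activations (feature maps) for a specific input, a forward hook is a common approach. This is only a minimal sketch: the layer name model.conv1 and the single 1x28x28 input tensor img are assumptions you would adapt to your model.

import matplotlib.pyplot as plt

# Sketch: capture the activations of model.conv1 via a forward hook
# (model and the 1x28x28 tensor `img` are assumed to exist already)
activations = {}

def save_activation(module, inp, out):
    activations['conv1'] = out.detach()

handle = model.conv1.register_forward_hook(save_activation)
model(img.unsqueeze(0))   # run a single image through the model
handle.remove()

act = activations['conv1'][0]           # [num_filters, H, W]
fig, axarr = plt.subplots(act.size(0))
for idx in range(act.size(0)):
    axarr[idx].imshow(act[idx].cpu(), cmap='gray')
plt.show()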
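Regarding point 1: below is a hypothetical sketch of such a bottleneck autoencoder for 1x28x28 MNIST inputs. The layer sizes are made up for illustration and are not the exact model from this thread; the relevant parts are the strided convolutions that shrink the activations (the bottleneck) and the transposed convolutions that restore the original shape.

import torch
import torch.nn as nn

# Hypothetical bottleneck autoencoder for 1x28x28 inputs (sizes are illustrative)
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7 (bottleneck)
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),    # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),     # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

autoencoder = ConvAutoencoder()
x = torch.randn(8, 1, 28, 28)
print(autoencoder(x).shape)   # torch.Size([8, 1, 28, 28]) -- same shape as the input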
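Regarding point 2: a decoder could look like the following sketch, assuming your encoder outputs a flat latent vector (the latent size of 32 and the channel counts are placeholders). A linear layer expands the latent vector, a view call reshapes it into a spatial map, and transposed convolutions upsample it to the input resolution.

import torch
import torch.nn as nn

# Hypothetical decoder for a flat latent vector (latent_dim=32 is a placeholder)
class Decoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=2, stride=2),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z)
        x = x.view(-1, 64, 7, 7)   # reshape the latent vector into a spatial map
        return self.deconv(x)

decoder = Decoder()
z = torch.randn(8, 32)
print(decoder(z).shape)   # torch.Size([8, 1, 28, 28])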