How to visualize fully connected layer output?

How can I visualize the outputs of the fully connected layers and, if possible, the weights of the fully connected layers as well?

You could try to plot them with matplotlib:

import torch
import torch.nn as nn
import matplotlib.pyplot as plt

activations = {}
def get_activation(name):
    def hook(model, input, output):
        # Store the module's output under the given name
        activations[name] = output.detach()
    return hook

model = nn.Sequential(
    nn.Linear(10, 25),
    nn.ReLU(),
    nn.Linear(25, 2)
)

# Weight matrix of the first linear layer
weights = model[0].weight.data.numpy()
# Capture the output of the first linear layer during the forward pass
model[0].register_forward_hook(get_activation('layer0'))

x = torch.randn(1, 10)
output = model(x)

plt.matshow(activations['layer0'])
plt.matshow(weights)

Would this work for you or are you looking for another type of visualization?

In this case, are you plotting the output of the fully connected layer? I have never used register_forward_hook; what does it do? Thank you for your help, by the way.

I register get_activation as a forward hook on the first layer, and inside the hook function I store the layer's output in the activations dict.
The dict holds the outputs of all hooked layers under their corresponding keys.
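To make the mechanism concrete, here is a minimal, self-contained sketch (assuming PyTorch is installed): register_forward_hook attaches a callable that PyTorch invokes after the module's forward pass with the arguments (module, input, output).

```python
import torch
import torch.nn as nn

activations = {}

def get_activation(name):
    def hook(module, input, output):
        # output is the tensor the module just produced
        activations[name] = output.detach()
    return hook

layer = nn.Linear(10, 25)
# The returned handle lets you detach the hook later via handle.remove()
handle = layer.register_forward_hook(get_activation('layer0'))

x = torch.randn(1, 10)
_ = layer(x)

print(activations['layer0'].shape)  # torch.Size([1, 25])
handle.remove()  # remove the hook when you no longer need it
```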

Then you're plotting the output of layer one, but how is it being plotted when it is a flat 25×1 vector? Are you plotting it as such?

Yes, I’m plotting it as an “image”.
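For a flat vector, matshow renders a 1×25 strip; you can also reshape it into a square grid, which is sometimes easier to read. A small sketch, using a random stand-in for the 25-dim activation from the example above:

```python
import torch
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt

act = torch.randn(1, 25)       # stand-in for activations['layer0']
plt.matshow(act)               # rendered as a 1x25 strip
plt.matshow(act.view(5, 5))    # same values as a 5x5 grid
plt.savefig('activations.png')
```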

Ok, I see, thank you! I'm trying to visualize the weights as well as the outputs in a convolution-like manner, but I'm having difficulty doing that. That was my main goal in asking this question, but I see I did a poor job of communicating it.

What do you mean by “in a convolution like manner”?
Could you link to a visualization you’ve seen like this?

Hello there, could this be used to visualize the ReLU output? I mean, each layer has a ReLU; can we get a visualization of the ReLU as well?

Regards, thanks

Sure! If you are using a nn.Sequential model, you can just register the forward hook to the nn.ReLU module.
In my example just call

model[1].register_forward_hook(get_activation('layer0_relu'))
...
plt.matshow(activations['layer0_relu'])
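Put together, an end-to-end sketch that hooks both the Linear layer and the ReLU in the Sequential model from this thread might look like this (the key names are just the ones used above):

```python
import torch
import torch.nn as nn

activations = {}

def get_activation(name):
    def hook(module, input, output):
        activations[name] = output.detach()
    return hook

model = nn.Sequential(nn.Linear(10, 25), nn.ReLU(), nn.Linear(25, 2))
model[0].register_forward_hook(get_activation('layer0'))
model[1].register_forward_hook(get_activation('layer0_relu'))

x = torch.randn(1, 10)
_ = model(x)

# The ReLU output is the layer0 output with negatives zeroed out
print(activations['layer0_relu'].min())
```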