How can I print a model summary that includes information about connections?

Hi. When printing my model using print(model), I only get the names of layers that contain learnable parameters, but I can't see the connections/relations between these layers. For my problem, it seems that some layers are not being trained, because they always produce the same output (even after training for 50+ epochs). Therefore I need to check what these layers are actually connected to in the computation graph. Is there any way in PyTorch to visualize the connections between these layers? In other words, can I visualize the computation graph? (I don't need a fancy figure like TensorBoard; text output would be fine.)


I'm not sure of a way to visualize PyTorch models other than printing them, as you mention. I think this is being worked on.
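That said, a rough text-only dump of the computation graph is possible by walking the autograd graph backwards from the output tensor's `grad_fn`. This is a sketch, not an official API for visualization: the model below is a made-up example, and the node names it prints are internal backward-function class names (e.g. `AddmmBackward0`), which can vary between PyTorch versions.

```python
import torch
import torch.nn as nn

# Hypothetical small model, just for illustration
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
out = model(torch.randn(1, 4))

def dump_graph(fn, depth=0):
    """Recursively print the autograd graph as indented text."""
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        dump_graph(next_fn, depth + 1)

dump_graph(out.grad_fn)
```

The indentation shows which node feeds which: each backward node's inputs are printed one level deeper, and `AccumulateGrad` leaves mark learnable parameters, so you can see exactly which layers are connected in the graph.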

However, since PyTorch is so flexible, you can call print() anywhere in the forward pass, which is great for debugging. So simply print the input just before it gets passed into the specific layer.

Let's say it's a convolutional layer:

```python
def forward(self, x):
    print(x)                    # inspect the input just before the layer
    x = self.conv1(x.double())
    x = F.softmax(x, dim=1)
    return x
```

Often, layers output the same values because they never learn. Your gradients might explode, or the input to the layer might always be zeros. This can happen with ReLU(): if every input to the ReLU is negative, its output is simply all zeros.
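The "dead ReLU" case is easy to reproduce in a couple of lines. With all-negative input, both the output and the gradient flowing back through the ReLU are zero, so anything upstream stops learning:

```python
import torch

# All-negative input to a ReLU
x = torch.tensor([-2.0, -0.5, -1.5], requires_grad=True)
y = torch.relu(x)
y.sum().backward()

print(y)       # tensor([0., 0., 0.]) -> the layer always outputs zeros
print(x.grad)  # tensor([0., 0., 0.]) -> no gradient reaches earlier layers
```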

Hope this helps 🙂

@Ditlev_Jorgensen Thanks for that! I’ll try it.