About nn.Linear(16*5*5,120)

Hi, I’m just getting started with PyTorch.
In this example, I don’t understand why the input size of the fully connected layer is the number of output channels of the previous convolution multiplied by the size of the convolution kernel (16*5*5)?
This is my code

import torch.nn as nn

class net(nn.Module):
    def __init__(self):
        super().__init__()
        self.cov1 = nn.Conv2d(1, 6, 5)   # 1 input channel, 6 output channels, 5x5 kernel
        self.cov2 = nn.Conv2d(6, 16, 5)  # 6 input channels, 16 output channels, 5x5 kernel
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 81)
        self.fc3 = nn.Linear(81, 10)

The 5*5 is not derived from the kernel size; it is the spatial size of the input to the linear layer, which happens to also be 5*5.
You can add a print statement right before feeding the activation to self.fc1 to check its shape, which should be [batch_size, 16, 5, 5] before flattening.
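The forward pass isn’t shown in the question, but assuming it follows the classic LeNet-style tutorial (a 2x2 max pool after each conv, 32x32 input), a sketch of tracing the shapes would look like this — the forward body and the 32x32 input size here are assumptions, not the original code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.cov1 = nn.Conv2d(1, 6, 5)
        self.cov2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 81)
        self.fc3 = nn.Linear(81, 10)

    def forward(self, x):
        # 32x32 -> conv (kernel 5) -> 28x28 -> 2x2 pool -> 14x14
        x = F.max_pool2d(F.relu(self.cov1(x)), 2)
        # 14x14 -> conv (kernel 5) -> 10x10 -> 2x2 pool -> 5x5
        x = F.max_pool2d(F.relu(self.cov2(x)), 2)
        print(x.shape)  # torch.Size([batch_size, 16, 5, 5])
        x = x.view(x.size(0), -1)  # flatten to [batch_size, 16*5*5]
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = Net()
out = net(torch.randn(1, 1, 32, 32))
print(out.shape)  # torch.Size([1, 10])
```

So the 5*5 comes from the spatial size left after two convolutions and two pooling steps, and the 16 from the output channels of the last conv layer; changing the input resolution would change the 5*5 accordingly.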

thank you very much!!!
