Hi, the linear layer requires you to specify how many features you need as input and as output.
In your case the convolutional layers are set up incorrectly.
The signature is self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size).
Therefore the output of conv1 will have 5 channels, which doesn't match the number of input channels conv2 expects.
The fully connected (linear) layer
is constructed as Linear(in_features, out_features).
Its in_features must match the flattened size of conv2's output (out_channels × height × width).
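As a minimal sketch of how the layers have to line up (the input shape and channel counts here are assumptions for illustration, not your actual model):

```python
import torch
import torch.nn as nn

# Hypothetical setup: 3-channel 32x32 input images.
conv1 = nn.Conv2d(in_channels=3, out_channels=5, kernel_size=5)   # outputs 5 channels
conv2 = nn.Conv2d(in_channels=5, out_channels=10, kernel_size=5)  # must accept 5 channels

x = torch.randn(1, 3, 32, 32)
h = conv2(conv1(x))            # shape: (1, 10, 24, 24)

# The linear layer takes the flattened features, not just the channel count:
fc = nn.Linear(in_features=10 * 24 * 24, out_features=2)
out = fc(h.flatten(1))
print(out.shape)               # torch.Size([1, 2])
```

If the in_channels of conv2 or the in_features of fc don't match, PyTorch raises a shape-mismatch RuntimeError at the first forward pass.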
Thanks for the response! How would you interpret the first two layers based on the top picture? Is 64 the input and output size? Is 5x5 the kernel size?
My guess so far would be:
Well, from the picture I would say the kernel size is 5x5.
The rest is not really defined. The second convolution should be 64 in, 128 out, unless they concatenate anything.
The first one should go from some input channel count to 64.
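Reading the picture that way, the first two layers would look roughly like this (the input channel count is an assumption, e.g. 3 for RGB, since the picture doesn't specify it):

```python
import torch
import torch.nn as nn

# Guessed from the picture: kernel size 5x5, 64 then 128 output channels.
conv1 = nn.Conv2d(3, 64, kernel_size=5)    # "something" (assumed 3) -> 64
conv2 = nn.Conv2d(64, 128, kernel_size=5)  # 64 -> 128, assuming no concatenation

x = torch.randn(1, 3, 64, 64)              # assumed 64x64 input for the demo
print(conv2(conv1(x)).shape)               # torch.Size([1, 128, 56, 56])
```

Without padding, each 5x5 convolution shrinks the spatial size by 4, which is why the 64x64 input comes out as 56x56 after two layers.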