From Conv2d->MaxPool2d to Linear in_features

Hi,
I have an image dataset where each image has size 28 * 28 * 3. I am passing it through two Conv2d/MaxPool2d blocks as follows:

    self.nn = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),#28*28*32
        nn.BatchNorm2d(32),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2), #14*14*32

        nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3), #12*12*64
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(2), #10*10*64

        nn.Flatten(),
        nn.Linear(in_features=???)
    )

I am stuck on what value I need to pass as in_features. If I pass 6400, I get a dimension error:

    RuntimeError: mat1 dim 1 must match mat2 dim 0

Can someone please help me understand where I am wrong?

Thank you!
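One way to get this value without hand calculation is to run a dummy input through the convolutional part and inspect the shape that comes out. A minimal sketch, assuming a dummy batch of one 28x28 RGB image and the layer stack from the post above:

    import torch
    import torch.nn as nn

    features = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),
        nn.BatchNorm2d(32),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

    x = torch.zeros(1, 3, 28, 28)   # dummy batch of one image
    out = features(x)
    print(out.shape)                # channels x height x width after the second pool
    print(out.flatten(1).shape)     # the second dim is the in_features the Linear needs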

flatten converts the whole tensor into a vector. Linear requires a batch of vectors.
You need to do tensor.reshape(-1, 10*10*64) in the forward.

Since I am calling it in nn.Sequential, it doesn't allow me to add nn.reshape(-1, 10*10*64) after nn.Flatten(). I get AttributeError: module 'torch.nn' has no attribute 'reshape'.

You have to call it in the forward and remove the Flatten.
So if you have a sequence like:

    x = self.layer(x)
    x = self.layer2(x)
    # add here
    x = x.reshape(-1, 10 * 10 * 64)
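For completeness, here is a minimal runnable sketch of that pattern inside a module. The layer names (self.layer, self.layer2, self.fc) and their sizes are placeholders based on the snippet above, not the original model:

    import torch
    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.layer = nn.Conv2d(3, 32, kernel_size=3, padding=1)  # 3x28x28 -> 32x28x28
            self.layer2 = nn.MaxPool2d(2)                            # 32x28x28 -> 32x14x14
            self.fc = nn.Linear(14 * 14 * 32, 512)                   # must match the flattened size

        def forward(self, x):
            x = self.layer(x)
            x = self.layer2(x)
            x = x.reshape(-1, 14 * 14 * 32)  # flatten everything except the batch dim
            return self.fc(x)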

Thank you. What I am unable to understand is that, from my calculation, I get 6400 (64 * 10 * 10) for the input features of the linear layer, but the value that actually works is 2304, not 6400. With in_features=2304, the Flatten and the nn.Linear work perfectly fine.

    nn.Linear(in_features=2304, out_features=512)

Can you please help me understand how the input features are 2304 instead of 6400?

Here are the Conv2d and MaxPool2d steps (applied twice) before passing to the linear layer:

    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),#28*28*32
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2), #14*14*32

    nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3), #12*12*64
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(2), #10*10*64? I get 6400, but the linear works with 2304
    nn.Flatten(),
    nn.Linear(in_features=2304, out_features=512)

Finally understood where I went wrong: declaring nn.MaxPool2d(2) sets the stride as well as the kernel size to 2, because stride defaults to kernel_size. I was expecting the stride to default to 1. With stride 2, the output size from the second MaxPool2d becomes 6*6, so 6*6*64 = 2304.
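For reference, the same arithmetic as a small sketch, using the standard output-size formula floor((size + 2*padding - kernel) / stride) + 1 (the helper name out_size is made up for illustration):

    def out_size(size, kernel, stride, padding=0):
        # standard conv/pool output-size formula
        return (size + 2 * padding - kernel) // stride + 1

    s = 28
    s = out_size(s, kernel=3, stride=1, padding=1)  # conv1: 28 -> 28
    s = out_size(s, kernel=2, stride=2)             # pool1: 28 -> 14
    s = out_size(s, kernel=3, stride=1)             # conv2: 14 -> 12
    s = out_size(s, kernel=2, stride=2)             # pool2 (stride = kernel_size): 12 -> 6
    print(s * s * 64)                               # 6 * 6 * 64 = 2304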