Getting error for output of linear layer

I want to get the output of the first linear layer of my model, which is defined like this:

model = CatAndDogConvNet()
CatAndDogConvNet(
  (conv1): Conv2d(3, 16, kernel_size=(5, 5), stride=(2, 2), padding=(1, 1))
  (maxpool_1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (relu_1): ReLU()
  (conv2): Conv2d(16, 32, kernel_size=(5, 5), stride=(2, 2), padding=(1, 1))
  (maxpool_2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (relu_2): ReLU()
  (conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (maxpool_3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (relu_3): ReLU()
  (fc1): Linear(in_features=2304, out_features=500, bias=True)
  (fc2): Linear(in_features=500, out_features=50, bias=True)
  (fc3): Linear(in_features=50, out_features=2, bias=True)
)

So I built a new model from its children like this:

model_new = torch.nn.Sequential(*list(model.children())[:10])
model_new
Sequential(
  (0): Conv2d(3, 16, kernel_size=(5, 5), stride=(2, 2), padding=(1, 1))
  (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (2): Conv2d(16, 32, kernel_size=(5, 5), stride=(2, 2), padding=(1, 1))
  (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (6): Linear(in_features=2304, out_features=500, bias=True)
)

But when I pass my images through it,

res_4 = []
model_new = torch.nn.Sequential(*list(model.children())[:10])
for i in range(len(imgs)):
    temp = model_new(imgs[i][0])
    res_4.append([temp, imgs[i][1]])

I get the following error:

RuntimeError Traceback (most recent call last)
/tmp/ipykernel_95235/516937314.py in
2 model_new = torch.nn.Sequential(*list(model.children())[:7])
3 for i in range(len(imgs)):
----> 4 temp = model_new(imgs[i][0])
5 res_4.append([temp, imgs[i][1]])

~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190             return forward_call(*input, **kwargs)
   1191         # Do not call functions when jit is used
   1192         full_backward_hooks, non_full_backward_hooks = [], []

~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
    202     def forward(self, input):
    203         for module in self:
--> 204             input = module(input)
    205         return input
    206

~/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1188         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189                 or _global_forward_hooks or _global_forward_pre_hooks):

--> 114             return F.linear(input, self.weight, self.bias)
    115
    116     def extra_repr(self) -> str:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (384x6 and 2304x500)

This is the summary I get for the new model when I pass an image through it:

summary(model_new, (1, 3, 224, 224))
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
Sequential                               --                        --
├─Conv2d: 1-1                            [1, 16, 111, 111]         1,216
├─MaxPool2d: 1-2                         [1, 16, 55, 55]           --
├─Conv2d: 1-3                            [1, 32, 27, 27]           12,832
├─MaxPool2d: 1-4                         [1, 32, 13, 13]           --
├─Conv2d: 1-5                            [1, 64, 13, 13]           18,496
├─MaxPool2d: 1-6                         [1, 64, 6, 6]             --
==========================================================================================
Total params: 32,544
Trainable params: 32,544
Non-trainable params: 0
Total mult-adds (M): 27.46
==========================================================================================
Input size (MB): 0.60
Forward/backward pass size (MB): 1.85
Params size (MB): 0.13
Estimated Total Size (MB): 2.58

Your nn.Sequential container is missing the nn.Flatten module between the 2D layers and the first nn.Linear.
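
For example, something along these lines should work, assuming the child ordering matches the Sequential printout above (indices 0-5 are the conv/pool stack, index 6 is fc1; the names children and model_fixed are just for illustration):

import torch

children = list(model.children())
model_fixed = torch.nn.Sequential(
    *children[:6],          # conv/pool stack, output shape [N, 64, 6, 6]
    torch.nn.Flatten(),     # flattens [N, 64, 6, 6] -> [N, 2304]
    children[6],            # fc1: Linear(in_features=2304, out_features=500)
)

out = model_fixed(torch.randn(1, 3, 224, 224))   # batched dummy input
print(out.shape)                                 # torch.Size([1, 500])

Without the Flatten, nn.Linear multiplies over the last dimension only, so a [64, 6, 6] activation is treated as 384 rows of 6 features, which is exactly the (384x6 and 2304x500) mismatch in your traceback.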

But why is the flattening not showing up? I have defined it in my model and the model trains fine. This is how I defined it:

import torch.nn as nn
import torch.nn.functional as F

class CatAndDogConvNet(nn.Module):

    def __init__(self):
        super().__init__()

        # convolutional layers (3, 16, 32)
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(5, 5), stride=2, padding=1)
        self.maxpool = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(5, 5), stride=2, padding=1)
        self.maxpool = nn.MaxPool2d(2)
        self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), padding=1)
        self.maxpool = nn.MaxPool2d(2)

        # connected layers
        self.fc1 = nn.Linear(in_features=64 * 6 * 6, out_features=500)
        self.fc2 = nn.Linear(in_features=500, out_features=50)
        self.fc3 = nn.Linear(in_features=50, out_features=2)

        self.maxpool = nn.MaxPool2d(2)

    def forward(self, X):

        X = self.maxpool(F.relu(self.conv1(X)))
        # print(X.shape)
        X = self.maxpool(F.relu(self.conv2(X)))
        # print(X.shape)
        X = self.maxpool(F.relu(self.conv3(X)))
        # print(X.shape)
        X = X.view(X.shape[0], -1)
        # print(X.shape)
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = self.fc3(X)

        return X

I think I found the issue. The flattening in my original forward() is done with X.view(), which is a plain tensor operation rather than a module, so it never shows up in model.children(). After I added nn.Flatten() to the Sequential with its default arguments, it kept dim 0 as the batch dimension and flattened everything else. Since I don't add a batch dimension to my test images (they are 3x224x224), the 64 channels of the conv output were treated as the batch, which is why it was still giving a shape error. So I changed it to Flatten(0, 2), which flattens everything into a single vector, and it worked.
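
For reference, a quick sketch of the two Flatten behaviours on an unbatched conv activation (the tensor name is just for illustration):

import torch

feat = torch.randn(64, 6, 6)                   # conv output with no batch dimension
print(torch.nn.Flatten()(feat).shape)          # torch.Size([64, 36])  - dim 0 kept as the "batch"
print(torch.nn.Flatten(0, 2)(feat).shape)      # torch.Size([2304])    - everything flattened

An alternative would be to keep the default nn.Flatten() and add a batch dimension to each test image with imgs[i][0].unsqueeze(0) before the forward pass.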