RuntimeError: Given input size: (32x20x1). Calculated output size: (32x10x0). Output size is too small

While running the script below, I get this error, which indicates a mismatch in the spatial dimensions of my CNN during training: the MaxPool applied after the activation of the second conv layer (conv2) produces an empty output. The shape of my tensor is torch.Size([157056, 40, 2]), which is the shape of the collected audio data after transformation.
How can I solve this problem? I need your help. Thanks.
Here is my code:
import torch
import torch.nn as nn
import torch.nn.functional as F

in_features = 576
out_features = 200
last_feature = 10

class CNNLetNet(nn.Module):
    def __init__(self):
        super(CNNLetNet, self).__init__()
        """
        Define the convolutional network that processes our data
        as a set of sequential layers.
        """
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=1, padding=1)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=1, padding=1)
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=1, padding=1)
        )
        self.fc1 = nn.Sequential(
            nn.Linear(64 * 3 * 3, 200)
        )
        self.fc2 = nn.Sequential(
            nn.Linear(200, 60)
        )
        self.fc3 = nn.Sequential(
            nn.Linear(60, 10)
        )
        self.pool = nn.MaxPool2d(2, 2)

    def forward(self, input_data):
        input_data = self.pool(F.relu(self.conv1(input_data)))
        input_data = self.pool(F.relu(self.conv2(input_data)))
        input_data = self.pool(F.relu(self.conv3(input_data)))
        input_data = torch.flatten(input_data, 1)
        input_data = F.relu(self.fc1(input_data))
        input_data = F.relu(self.fc2(input_data))
        input_data = self.fc3(input_data)
        return input_data

The spatial size of your input is too small for the model architecture and one of the pooling layers would create an empty tensor.
Remove some pooling layers or increase the spatial size of the input.
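To see why the error appears, you can trace the spatial size through the network by hand: each MaxPool2d(2, 2) floors the height and width to half, so a dimension of size 1 becomes 0 after a single pool. A minimal sketch (plain Python, no PyTorch needed):

```python
# Trace how MaxPool2d(2, 2) shrinks the spatial dims of a feature map.
# Each pool floor-divides H and W by 2; once a dim hits 0, the layer fails.
def trace_pooling(h, w, num_pools):
    sizes = [(h, w)]
    for _ in range(num_pools):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

# With a 20x1 spatial input (as in the error message), the second pool
# would already need to produce a zero-sized dimension:
print(trace_pooling(20, 1, 3))  # [(20, 1), (10, 0), (5, 0), (2, 0)]
```

This matches the error text: the calculated output size (32x10x0) has a zero dimension.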

Thanks. I have now reshaped my data and reduced its size; the shape is torch.Size([30, 100, 100]), i.e. 30 matrices of 100×100. A new error is raised with this updated model:

        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=1, padding=1)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=1, padding=1)
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=1, padding=1)
        )
        self.fc1 = nn.Sequential(
            nn.Linear(64 * 3 * 3, 250)
        )
        self.fc2 = nn.Sequential(
            nn.Linear(250, 150)
        )
        self.fc3 = nn.Sequential(
            nn.Linear(150, 80)
        )
        self.fc4 = nn.Sequential(
            nn.Linear(80, 10)
        )
        self.pool = nn.Sequential(
            nn.MaxPool2d(2, 2)
        )

    def forward(self, input_data):
        input_data = self.pool(F.relu(self.conv1(input_data)))
        input_data = self.pool(F.relu(self.conv2(input_data)))
        input_data = self.pool(F.relu(self.conv3(input_data)))
        input_data = torch.flatten(input_data, 1)
        input_data = F.relu(self.fc1(input_data))
        input_data = F.relu(self.fc2(input_data))
        input_data = F.relu(self.fc3(input_data))
        input_data = self.fc4(input_data)
        return input_data

The in_features value in self.fc1 is wrong: 64 * 3 * 3 instead of 9216 (= 64 * 12 * 12, since three pools take 100 → 50 → 25 → 12) for this input shape.

So, how can I modify it? Need your suggestions. Thanks

Change the in_features to 9216.
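Rather than hard-coding 9216, you can let a dummy forward pass through the convolutional part compute the flattened size for you. A sketch, assuming the conv/pool layers defined above:

```python
import torch
import torch.nn as nn

# Infer fc1's in_features by running a dummy input through the
# convolutional stack (same layers as in the model above).
conv = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(16, 32, 3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(32, 64, 3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
)

with torch.no_grad():
    dummy = torch.zeros(1, 1, 100, 100)       # one 100x100 single-channel input
    flat = torch.flatten(conv(dummy), 1)

print(flat.shape[1])  # 9216 = 64 * 12 * 12
```

This way the linear layer stays correct if you change the input resolution again.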

Thank you for your help.

Another issue has come up. While training the model, the data seems to be a tuple, not a tensor. During preprocessing I converted the NumPy array into a tensor with signal = transforms.ToTensor()(np.array(signal)).float(), which seems correct, right? So, how can I make this work:

    for epoch in range(epochs):
        model.train()
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            target, labels = data

            outputs = model(target)
            loss = criterion(outputs, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            if i % 100 == 0:
                print(f"\nEpochs : {epoch + 1} | Steps : {i + 1}/{total_len} | Loss : {running_loss / 100}")

    print("Finished training")

Check why labels is a tuple and pass it to the criterion as a tensor instead.
I don’t know why it’s a tuple in your setup, so I cannot suggest a more specific fix.
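As a quick workaround, you can convert the batch of labels to a tensor inside the training loop before calling the criterion. A sketch, assuming (this is a guess, since the Dataset code isn't shown) that the DataLoader yields labels as a plain tuple of Python ints because the Dataset's __getitem__ returns non-tensor labels:

```python
import torch

# Hypothetical batch of labels as the DataLoader might yield them.
labels = (3, 1, 4, 1)

# nn.CrossEntropyLoss expects class indices as a LongTensor,
# so convert the tuple before computing the loss:
if not torch.is_tensor(labels):
    labels = torch.as_tensor(labels, dtype=torch.long)

print(labels)  # tensor([3, 1, 4, 1])
```

The cleaner long-term fix is to make the Dataset's __getitem__ return tensors directly, so the default collate function stacks them for you.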

Thank you, sir. Solved.