RuntimeError: Given input size: (6x1x24). Calculated output size: (6x0x12). Output size is too small

While trying to execute the code below, I get an error which I am not able to understand. Please suggest a fix.

import numpy as np
import qiskit
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=3, stride=1, padding=1)
        self.dropout = nn.Dropout2d()
        self.fc1 = nn.Linear(256, 64)
        self.fc2 = nn.Linear(64, 1)
        # Hybrid is the quantum layer defined earlier (not shown here)
        self.hybrid = Hybrid(qiskit.Aer.get_backend('qasm_simulator'), 100, np.pi / 2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2)
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = self.dropout(x)
        x = x.view(1, -1)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        x = self.hybrid(x)
        return torch.cat((x, 1 - x), -1)

model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_func = nn.NLLLoss()

epochs = 20
loss_list = []

model.train()
for epoch in range(2):
    total_loss = []
    for i, data in enumerate(train_ldr, 0):
        # get the inputs; data is a dict holding the inputs and labels
        X_train, Y_train = data.values()

        X_train = X_train.unsqueeze(0)
        X_train = X_train.unsqueeze(1)
        Y_train = Y_train.unsqueeze(1)
        optimizer.zero_grad()
        # Forward pass
        print(X_train)
        output = model(X_train)
        print(output)
        # Calculating loss
        loss = loss_func(output, Y_train)
        # Backward pass
        loss.backward()
        # Optimize the weights
        optimizer.step()

        total_loss.append(loss.item())
    loss_list.append(sum(total_loss) / len(total_loss))
    print('Training [{:.0f}%]\tLoss: {:.4f}'.format(
        100. * (epoch + 1) / epochs, loss_list[-1]))

Output and error are:

tensor([[[[ 8.2135e+02,  8.2617e+02,  7.4933e+01,  1.2907e+01,  1.2907e+01,
            5.8055e+00,  7.3675e+01,  5.4667e+00,  0.0000e+00,  9.1295e+00,
            1.0558e+02, -2.8343e-01, -5.7785e-02,  1.0662e-04,  1.1515e-04,
            1.5495e-02,  7.5414e-03,  7.5414e-03,  2.0546e+00, -2.8343e-01,
           -5.7785e-02,  1.2116e+03,  6.1832e+01,  7.3159e+02]]]])
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-24-922f607d5b14> in <module>
     21         # Forward pass
     22       print(X_train)
---> 23       output = model(X_train)
     24       print(output)
     25         # Calculating loss

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

<ipython-input-18-f2f887eb4a09> in forward(self, x)
     11     def forward(self, x):
     12         x = F.relu(self.conv1(x))
---> 13         x = F.max_pool2d(x, 2)
     14         x = F.relu(self.conv2(x))
     15 #        x = F.max_pool2d(x, 2)

~\anaconda3\lib\site-packages\torch\_jit_internal.py in fn(*args, **kwargs)
    363             return if_true(*args, **kwargs)
    364         else:
--> 365             return if_false(*args, **kwargs)
    366 
    367     if if_true.__doc__ is None and if_false.__doc__ is not None:

~\anaconda3\lib\site-packages\torch\nn\functional.py in _max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode, return_indices)
    657     if stride is None:
    658         stride = torch.jit.annotate(List[int], [])
--> 659     return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    660 
    661 

RuntimeError: Given input size: (6x1x24). Calculated output size: (6x0x12). Output size is too small

The input tensor is too small in its spatial size, so the pooling layer would create an empty tensor.
You would either have to increase the spatial size of the input or change the model architecture, e.g. by removing some pooling layers.
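
For example, here is a minimal sketch (assuming the printed input above really has shape [1, 1, 1, 24], i.e. batch, channels, height, width) showing where the spatial size collapses, and one way to keep a pooling layer by pooling only along the width:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 1, 24)                     # assumed input shape: height 1, width 24
conv1 = nn.Conv2d(1, 6, kernel_size=3, stride=1, padding=1)
out = F.relu(conv1(x))
print(out.shape)                                 # torch.Size([1, 6, 1, 24]) -> the "6x1x24" in the error
# F.max_pool2d(out, 2)                           # fails: a 2x2 window cannot fit into a height of 1
out = F.max_pool2d(out, kernel_size=(1, 2))      # pool only along the width instead
print(out.shape)                                 # torch.Size([1, 6, 1, 12])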

Another error that pops up after removing the pooling layers is:

IndexError: dimension specified as 0 but tensor has no dimensions

Kindly suggest what is going wrong. Is it the data I have uploaded?

IndexError                                Traceback (most recent call last)
<ipython-input-10-86dbc70a4dd9> in <module>
     25       print(output)
     26         # Calculating loss
---> 27       loss = loss_func(output, Y_train)
     28         # Backward pass
     29       loss.backward()

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    214     def forward(self, input: Tensor, target: Tensor) -> Tensor:
    215         assert self.weight is None or isinstance(self.weight, Tensor)
--> 216         return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
    217 
    218 

~\anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   2381         raise ValueError("Expected 2 or more dimensions (got {})".format(dim))
   2382 
-> 2383     if input.size(0) != target.size(0):
   2384         raise ValueError(
   2385             "Expected input batch_size ({}) to match target batch_size ({}).".format(input.size(0), target.size(0))

IndexError: dimension specified as 0 but tensor has no dimensions

Either the model output or the target seems to be a scalar (0-dim) tensor, so you would have to make sure the model output has the shape [batch_size, nb_classes], while the target has the shape [batch_size] and contains the class indices in [0, nb_classes-1].
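
For reference, here is a minimal sketch of shapes that nn.NLLLoss will accept (the numbers are made up):

import torch
import torch.nn as nn

batch_size, nb_classes = 4, 2
output = torch.log_softmax(torch.randn(batch_size, nb_classes), dim=1)  # [batch_size, nb_classes] log-probabilities
target = torch.randint(0, nb_classes, (batch_size,))                    # [batch_size] class indices in [0, nb_classes-1]
loss = nn.NLLLoss()(output, target)
print(loss)                                                             # a 0-dim loss tensor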

In the above program, I have used:

batch size = 20, i.e. each batch has shape torch.Size([20, 24]) (the last one is [19, 24])
in_features = 480
out_features = 240
out_classes = 2
kernel_size = 5

Being a beginner, I am not sure what to set the in_features and out_features of the nn.Linear() modules to.
I have removed the pooling layers, but every time I make a change I get the error:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x3072 and 7680x240)

This shape mismatch error is most likely raised by a linear layer that expects a different in_features value.
I'm not sure which layer it is, so you would need to look for in_features=7680 in your model definition and change it to 3072, as this seems to be the number of features in the incoming activation.
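
As a rough sketch of the fix (the 3072 comes from the error message; the activation shape below is just an assumption chosen to produce 3072 features):

import torch
import torch.nn as nn

activation = torch.randn(1, 6, 16, 32)              # hypothetical conv output: 6 * 16 * 32 = 3072 features
flat = activation.view(activation.size(0), -1)      # flatten while keeping the batch dimension
print(flat.shape)                                   # torch.Size([1, 3072])
fc = nn.Linear(in_features=3072, out_features=240)  # in_features must match the flattened feature count
print(fc(flat).shape)                               # torch.Size([1, 240])

Printing the flattened shape right before the linear layer is usually the quickest way to find the right in_features.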