Understanding Linear layer batch size

Hello,
I have been struggling to understand how the batching done by the DataLoader interacts with nn.Module. As far as I understand, nn.Linear accepts any tensor whose last dimension matches in_features, treating all leading dimensions as batch dimensions. However, I cannot see this being the case in the following piece of code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FNet(nn.Module):
    def __init__(self, **kwargs):
        super(FNet, self).__init__()
        # Three fully connected layers mapping the input down to a single output
        self.fc1 = nn.Linear(32678, 1024)
        self.fc2 = nn.Linear(1024, 256)
        self.fc3 = nn.Linear(256, 1)

    def forward(self, X):
        X = F.relu(self.fc1(F.relu(X)))
        X = F.relu(self.fc2(X))

        return torch.sigmoid(self.fc3(X))
train_dataloader = DataLoader(train_dataset, batch_size=train_batch, shuffle=True)

model.train()
for batch_idx, (data, labels) in enumerate(train_dataloader):
    optimizer.zero_grad()
    print(data.shape, data.dtype)
    data = data.to(device)
    labels = labels.to(device)

    # Forward pass: compute predicted y by passing x to the model
    data = data.reshape(train_batch, 1, 32768).float()
    labels_pred = model(data)

    loss = criterion(labels_pred, labels.long())
    loss.backward()
    optimizer.step()

In the code above, I reshape the input to (batch_size, 1, 32768), so each sample should be seen as a 1x32768 input. However, I get the following error:

RuntimeError: size mismatch, m1: [4 x 32768], m2: [32678 x 1024]

I will be grateful for any response. Thank you.

The in_features of your first nn.Linear is 32678, but you reshape the data to `32768`. The digits 6 and 7 are transposed in your code.
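
As a side note, nn.Linear only checks the last dimension of its input; all leading dimensions are treated as batch dimensions. Here is a minimal sketch (shapes taken from your code) showing both the mismatch and the fix:

import torch
import torch.nn as nn

x = torch.randn(4, 1, 32768)   # (batch, 1, features), as produced by your reshape

bad = nn.Linear(32678, 1024)   # in_features has the 6 and 7 transposed
# bad(x)                       # raises RuntimeError: size mismatch

good = nn.Linear(32768, 1024)  # in_features matches the last dimension of x
print(good(x).shape)           # torch.Size([4, 1, 1024])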

Thank you so much, I didn't see that error.