PyTorch error: IndexError: dimension specified as 0 but tensor has no dimensions

Hello!
I have a dataset where data is 23761 x 13 and labels are 23761 x 1, loaded by a DataLoader.
My net:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(13, 13)
        self.fc2 = nn.Linear(13, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.logsigmoid(x)

net = Net()
criterion = nn.CrossEntropyLoss()

carset = carDataset()
train_loader = torch.utils.data.DataLoader(carset, batch_size=10, shuffle=True)

for epoch in range(epochs):
    running_loss = 0
    for features, labels in train_loader:
        output = net(features)
        labels = labels.squeeze()
        loss = criterion(output, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

    print(running_loss / len(train_loader))  # average loss for the epoch

When I set batch_size = 10 or batch_size = 20, it throws:

Traceback (most recent call last):
  File "nn.py", line 75, in <module>
    loss = criterion(output, labels)
  ...
    if input.size(0) != target.size(0):
IndexError: dimension specified as 0 but tensor has no dimensions

When I set it to 64, it works normally and I get results.

My labels have 2 classes: 0 or 1.
What's wrong with it?

Hi @Dmitriy. The reason it fails is that the last batch of each epoch has size 1 when you use batch_size = 10 or batch_size = 20, because your dataset length is 23761 (23761 % 10 == 23761 % 20 == 1). For that batch, labels of size torch.Size([1, 1]) end up with size torch.Size([]) after

labels = labels.squeeze()

giving the error you mention. To solve this, replace that line with labels = labels.squeeze(1), which avoids squeezing the batch dimension, or set drop_last=True in your DataLoader.
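Here is a minimal sketch of the difference, using a dummy size-1 batch of labels (the values are made up):

import torch

labels = torch.zeros(1, 1, dtype=torch.long)  # last batch of size 1
print(labels.squeeze().shape)   # torch.Size([]) -- 0-dim tensor, breaks the loss
print(labels.squeeze(1).shape)  # torch.Size([1]) -- batch dimension preserved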

Note moreover that nn.CrossEntropyLoss() already applies nn.LogSoftmax() internally (see CrossEntropyLoss — PyTorch 2.1 documentation). Double-check whether

return F.logsigmoid(x)

is what you want.
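You can verify that nn.CrossEntropyLoss() is equivalent to nn.LogSoftmax() followed by nn.NLLLoss() with a quick check (random tensors, purely for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(10, 2)           # batch of 10 samples, 2 classes
targets = torch.randint(0, 2, (10,))  # class indices 0 or 1

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))  # True

Hope this helps!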

Ah, that makes sense!
Thank you for your support. :slightly_smiling_face:

P.S. About nn.CrossEntropyLoss(): I am a newbie in PyTorch and haven't really studied all the functions yet. It seems I should use criterion = nn.NLLLoss() to avoid the duplicated work?

I believe using F.logsigmoid(x) might be undesired, as your (log) probabilities will be independent for the two classes (one class might have probability 0.5 and the other 0.7). Your model might still train well, but the output probabilities will not add up to one. For simplicity, I suggest using

return x

instead of

return F.logsigmoid(x)

and using nn.CrossEntropyLoss(), which takes care of both nn.LogSoftmax() and nn.NLLLoss(). Its internal nn.LogSoftmax() makes sure your probabilities across classes add up to one.
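Put together, a sketch of your Net with that change (same layers as in your original code):

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(13, 13)
        self.fc2 = nn.Linear(13, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)  # raw logits; nn.CrossEntropyLoss handles the rest

criterion = nn.CrossEntropyLoss()

If you ever need actual probabilities at inference time, you can apply F.softmax(output, dim=1) to the logits outside the loss computation.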