Error: Assertion `cur_target >= 0 && cur_target < n_classes' failed

Hi guys, I am trying to train a classifier on my data with the NN model below. I have 7 inputs (features) and 1 output (label). The training loop computes the loss, but it fails with the error above, and I don't understand why things don't match.

This is my NN model: 7 inputs (features) and 1 output (label); the hidden layers have different sizes.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(7, 100)
        self.fc2 = nn.Linear(100, 80)
        self.fc3 = nn.Linear(80, 50)
        self.fc4 = nn.Linear(50, 30)
        self.fc5 = nn.Linear(30, 10)
        self.fc6 = nn.Linear(10, 1)  # single output unit

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = F.relu(self.fc5(x))
        x = self.fc6(x)  # raw logits, no activation
        return x

net = Net()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
for epoch in range(3):
    iterations = 0
    running_loss = 0
    for i, (inputs, labels) in enumerate(train_loader):
        iterations += 1

        inputs = inputs.float()
        labels = labels.long()

        # Forward pass
        output = net(inputs)
        # Loss calculation
        loss = criterion(output, labels)
        running_loss = running_loss + loss.item()

        # Predictions and accuracy
        _, prd = torch.max(output, dim=1)
        accuracy = (prd == labels).float().mean()

        # Clear the gradient buffer (we don't want to accumulate gradients)
        optimizer.zero_grad()
        # Backpropagation
        loss.backward()
        # Weight update: w <-- w - lr * gradient
        optimizer.step()

        print("Epoch [{}][{}/{}], Loss: {:.3f}".format(epoch, i, len(train_loader), running_loss / iterations))

The error seems to be saying that the target labels and the model's output classes cover different index ranges.

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-38-7ccf84c3537e> in <module>
     15         output = net(inputs)
     16         # Loss Calculation
---> 17         loss = criterion(output, labels)
     18 
     19 

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    545             result = self._slow_forward(*input, **kwargs)
    546         else:
--> 547             result = self.forward(*input, **kwargs)
    548         for hook in self._forward_hooks.values():
    549             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
    917 
    918 

~\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   1993     if size_average is not None or reduce is not None:
   1994         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 1995     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   1996 
   1997 

~\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1822                          .format(input.size(0), target.size(0)))
   1823     if dim == 2:
-> 1824         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1825     elif dim == 4:
   1826         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed.  at C:\w\1\s\tmp_conda_3.7_055457\conda\conda-bld\pytorch_1565416617654\work\aten\src\THNN/generic/ClassNLLCriterion.c:94

I assume you are working with a binary classification use case?
If that's the case, your last linear layer should output two logits; currently it outputs only a single one, so the loss function sees just one class.
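
You can see what the assertion means directly: with a single output unit, the softmax is taken over one class, so the only valid target index is 0 and any label of 1 is already out of range. A minimal sketch to check this (standalone tensors, not your actual variables):

import torch

output = torch.randn(4, 1)           # shape [batch_size, 1] -> n_classes = 1
labels = torch.tensor([0, 1, 1, 0])  # any 1 here triggers the assertion

print(output.size(1))                             # number of classes the loss sees: 1
print(labels.min().item(), labels.max().item())   # targets must lie in [0, n_classes)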

Alternatively, you could keep the shape and use nn.BCEWithLogitsLoss.
Make sure to pass the targets to this loss function as FloatTensors with the same shape as the model output.
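
A minimal sketch of that variant, assuming your labels come out of the loader as a 1-D tensor of 0s and 1s:

criterion = torch.nn.BCEWithLogitsLoss()

output = net(inputs)                  # shape [batch_size, 1], raw logits
labels = labels.float().unsqueeze(1)  # FloatTensor, same shape as output
loss = criterion(output, labels)

# predictions: threshold the sigmoid of the logits at 0.5
prd = (torch.sigmoid(output) > 0.5).long()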


For a binary classification problem, shouldn't the output layer have just 1 neuron?

You could choose between two different approaches:

  • treat the binary classification as a multi-class classification with two output units and use nn.CrossEntropyLoss. The target should then contain the class indices ([0, 1] in your case); see the sketch after this list
  • use a single output unit and use nn.BCEWithLogitsLoss, as in the example above
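
For the first approach, a minimal sketch of the pieces that would change in your code (everything else in the training loop stays the same):

self.fc6 = nn.Linear(10, 2)   # two output units, one logit per class

criterion = torch.nn.CrossEntropyLoss()

output = net(inputs)          # shape [batch_size, 2]
labels = labels.long()        # LongTensor of class indices, 0 or 1
loss = criterion(output, labels)
_, prd = torch.max(output, dim=1)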

Sorry for not being clear enough before. :wink:
