Weight tensor should be defined either for all or no classes

I'm getting the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-42-cafff645f983> in <module>()
     13         print("targets[:, 0] size ==> ",len(targets[:, 0]))
     14 
---> 15         loss = criterion(outputs, targets[:, 0])
     16         loss.backward()
     17         optimizer.step()

/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-1-992294f0aa44> in forward(self, outputs, targets)
     11 
     12     def forward(self, outputs, targets):
---> 13         return self.loss(F.log_softmax(outputs), targets)

/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    145         _assert_no_grad(target)
    146         return F.nll_loss(input, target, self.weight, self.size_average,
--> 147                           self.ignore_index, self.reduce)
    148 
    149 

/opt/anaconda/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce)
   1049         return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
   1050     elif dim == 4:
-> 1051         return torch._C._nn.nll_loss2d(input, target, weight, size_average, ignore_index, reduce)
   1052     else:
   1053         raise ValueError('Expected 2 or 4 dimensions (got {})'.format(dim))

RuntimeError: weight tensor should be defined either for all or no classes at /opt/conda/conda-bld/pytorch_1513368888240/work/torch/lib/THNN/generic/SpatialClassNLLCriterion.c:60

Here is my code:

weight = torch.ones(22)

criterion = CrossEntropyLoss2d(weight)

for epoch in range(1, num_epochs + 1):
    epoch_loss = []
    iteration = 1
    for step, (images, labels) in enumerate(trainLoader):
        print("Iter:" + str(iteration))
        iteration += 1
        inputs = Variable(images)
        targets = Variable(labels)

        outputs = model(inputs)
        optimizer.zero_grad()
        print("outputs size ==> ", len(outputs))
        print("targets[:, 0] size ==> ", len(targets[:, 0]))

        loss = criterion(outputs, targets[:, 0])
        loss.backward()
        optimizer.step()
        epoch_loss.append(loss.data[0])

        average = sum(epoch_loss) / len(epoch_loss)

        print("loss: " + str(average) + " epoch: " + str(epoch) + ", step: " + str(step))

Can you please help me here?

Thanks in advance

What are the sizes of the inputs to criterion? (i.e., what are outputs.size() and targets[:, 0].size()?)

The error implies that the size of the weight tensor (22) isn't equal to the number of classes in outputs.
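
For reference, a minimal sketch of the mismatch (the shapes here are made up; only the weight size matters):

import torch
import torch.nn.functional as F

outputs = torch.randn(1, 2, 4, 4)       # [n, c, h, w] with c = 2 classes
targets = torch.zeros(1, 4, 4).long()   # [n, h, w] with values in [0, c)

# weight = torch.ones(22)               # 22 entries for 2 classes -> the error above
weight = torch.ones(2)                  # one entry per class works
loss = F.nll_loss(F.log_softmax(outputs, dim=1), targets, weight=weight)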


Hi Richard, thanks a lot for your help.

I solved that one; I was specifying the class count incorrectly.

My actual class count is 2, so I changed the weight size to 2.

But now I am getting a different error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-62-a24d68a5b61a> in <module>()
     25 #                 f'target (epoch: {epoch}, step: {step})')
     26 
---> 27         loss = criterion(outputs, targets[:, 0])
     28         loss.backward()
     29         optimizer.step()

/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-1-62b1bfa509f1> in forward(self, outputs, targets)
     11 
     12     def forward(self, outputs, targets):
---> 13         return self.loss(F.log_softmax(outputs), targets)

/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    145         _assert_no_grad(target)
    146         return F.nll_loss(input, target, self.weight, self.size_average,
--> 147                           self.ignore_index, self.reduce)
    148 
    149 

/opt/anaconda/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce)
   1049         return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
   1050     elif dim == 4:
-> 1051         return torch._C._nn.nll_loss2d(input, target, weight, size_average, ignore_index, reduce)
   1052     else:
   1053         raise ValueError('Expected 2 or 4 dimensions (got {})'.format(dim))

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed.  at /opt/conda/conda-bld/pytorch_1513368888240/work/torch/lib/THNN/generic/SpatialClassNLLCriterion.c:111

Can you please help?

Thanks in advance

Now it seems that your target has some illegal values: they are either negative or >= n_classes.
Could you check this?
If you have two classes, your target should only contain the values 0 and 1.
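
A quick way to verify this inside the training loop (a sketch; targets stands in for your label batch):

print(targets.min(), targets.max())
print(torch.unique(targets))   # for two classes this should print tensor([0, 1])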

Hi, I'm having the same problem… my target has three values: 0, 1, and 255, where 255 is the ignore index. I defined my weights as [0.1, 0.9] for classes 0 and 1, but it didn't work. I suppose it's because of the ignore index 255, but isn't it supposed to be ignored anyway?
How can I work around this? Thanks :wink:

This code snippet seems to work:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(
    ignore_index=255, weight=torch.tensor([1., 2.]), reduction='none')

x = torch.randn(3, 2)
y = torch.tensor([0, 1, 255])

loss = criterion(x, y)
print(loss)
> tensor([0.8519, 1.1084, 0.0000])
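
Note that with reduction='none' you can see the effect directly: the position with target 255 contributes a loss of 0. With the default reduction='mean', the ignored elements are also left out of the denominator, so they don't dilute the average.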

Thanks. But I have one further question.
I have a target with shape [n, w, h], where n is the batch size; an output with shape [n, c, w, h]; and my mask is a binary mask with ignore index 255.

I tried your script and found that it only works when the output channel dimension is 2, i.e. output shape [n, 2, w, h]: one channel for class 0 and one channel for class 1.

That makes sense, since the weight dimension should match the class dimension of the output. But because this is a binary classification problem, I want to save some memory and output just one channel, with 0 and 1 predicted in the same channel: output shape [n, 1, w, h].

To use the weights, do I still have to reshape the output into two channels, or am I doing something wrong here?
Thanks :wink:

For a binary classification you could either use your setup with two output channels and nn.CrossEntropyLoss, or alternatively output a single channel and use nn.BCEWithLogitsLoss.
The former is used for multi-class classification, while the latter approach is used for binary or multi-label classification.
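
If you do want the single-channel output with class weighting and an ignore index, one possible workaround (my own sketch, not a built-in feature: nn.BCEWithLogitsLoss has no ignore_index argument) is to compute the unreduced loss and mask out the 255 pixels manually:

import torch
import torch.nn as nn

n, w, h = 4, 8, 8
output = torch.randn(n, 1, w, h)           # single-channel logits
target = torch.randint(0, 2, (n, w, h))    # labels in {0, 1}
target[0, 0, 0] = 255                      # one ignored pixel

# a pos_weight of 9. would roughly mirror [0.1, 0.9] class weights (up to scale)
criterion = nn.BCEWithLogitsLoss(reduction='none', pos_weight=torch.tensor([9.]))

mask = (target != 255).float()             # 1 where the pixel counts, 0 where ignored
loss = criterion(output.squeeze(1), target.float() * mask)  # ignored targets clamped to 0
loss = (loss * mask).sum() / mask.sum().clamp(min=1)        # mean over valid pixels only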


Hi, thanks for the reply.

Yes, I'm aware of that. I'm not using BCE loss; I'm using my own focal loss, which uses NLLLoss inside:

import torch
import torch.nn as nn


class FocalLossnd(nn.Module):
    def __init__(self, weights=None, gamma=0, reduction='mean', ignore_idx=255):
        super().__init__()
        self.weights = weights
        self.gamma = gamma
        self.reduction = reduction
        self.eps = 1e-6
        self.ignore_idx = ignore_idx

    def forward(self, pred, target):
        # .long() instead of .type(torch.cuda.LongTensor) keeps the loss device-agnostic
        target = target.long()
        # per-class probabilities, shape [n, c, ...]
        pt = torch.softmax(pred, dim=1)
        # focal modulation: down-weights well-classified (high-pt) locations
        focal_weights = torch.pow(1. - pt, self.gamma)
        # modulated log-probabilities, which NLLLoss expects as its input
        focal = focal_weights * torch.log(pt + self.eps)

        criterion = nn.NLLLoss(
            weight=self.weights,
            ignore_index=self.ignore_idx,
            reduction=self.reduction)
        loss = criterion(focal, target)

        return loss
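
For what it's worth, a quick CPU sanity check of the class above (shapes and values made up for illustration):

criterion = FocalLossnd(weights=torch.tensor([0.1, 0.9]), gamma=2)
pred = torch.randn(4, 2, 8, 8)             # [n, c, w, h] logits with c = 2
target = torch.randint(0, 2, (4, 8, 8))    # [n, w, h] labels in {0, 1}
target[0, 0, 0] = 255                      # an ignored pixel
print(criterion(pred, target))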