Hi, I’m a little stuck with CrossEntropyLoss.
I have a dataset of 500 images, all pixel-wise labeled for semantic segmentation. The dataset contains 5 classes, and the problem is that one class covers about 84% of all pixels.
That’s why I wanted to use the weight argument of CrossEntropyLoss to weight the other classes higher.
I’m stuck with this error:
Traceback (most recent call last):
File "C:\Users\lukas\anaconda3\envs\Python\lib\site-packages\segmentation_models_pytorch\utils\train.py", line 47, in run
loss, y_pred = self.batch_update(x, y)
File "C:\Users\lukas\anaconda3\envs\Python\lib\site-packages\segmentation_models_pytorch\utils\train.py", line 104, in batch_update
loss = self.loss(prediction, y.long())
File "C:\Users\lukas\anaconda3\envs\Python\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\lukas\anaconda3\envs\Python\lib\site-packages\torch\nn\modules\loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "C:\Users\lukas\anaconda3\envs\Python\lib\site-packages\torch\nn\functional.py", line 2021, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "C:\Users\lukas\anaconda3\envs\Python\lib\site-packages\torch\nn\functional.py", line 1840, in nll_loss
ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: 1only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [2, 5, 928, 928]
Process finished with exit code 1
Both tensors, input and target, have the same size of [2, 5, 928, 928], with batch_size = 2, number_classes = 5, and image size [928, 928].
Any suggestions what I’m doing wrong? I also tried the target as a non-one-hot-encoded tensor but ended up with another error …
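For context on what the error means: `nn.CrossEntropyLoss` expects the target for 2D data to contain class *indices* with shape `[N, H, W]`, not a one-hot tensor of shape `[N, C, H, W]` — hence the "spatial targets supported (non-empty 3D tensors)" message. A minimal sketch of the fix, assuming the target is currently one-hot along dim 1 (shapes and weight values below are placeholders matching the post, not the actual dataset):

```python
import torch
import torch.nn as nn

# hypothetical shapes from the post: batch=2, classes=5, image 928x928
batch_size, num_classes, H, W = 2, 5, 928, 928

prediction = torch.randn(batch_size, num_classes, H, W)  # raw logits from the model

# a stand-in one-hot target with shape [2, 5, 928, 928], like the failing one
one_hot_target = torch.nn.functional.one_hot(
    torch.randint(0, num_classes, (batch_size, H, W)), num_classes
).permute(0, 3, 1, 2)

# Collapse the one-hot channel dimension to class indices: [N, C, H, W] -> [N, H, W]
target = one_hot_target.argmax(dim=1)

# example per-class weights to counter the ~84% majority class
# (assuming class 0 is the dominant one; tune these for the real data)
weights = torch.tensor([0.2, 1.0, 1.0, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=weights)

loss = criterion(prediction, target.long())  # scalar loss, no shape error
```

If the dataset loader can be changed instead, it is simpler to return the label map as class indices directly (dtype `long`, shape `[H, W]`) and skip the one-hot encoding entirely.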