I would like to know why I get this error when using CrossEntropyLoss() for a semantic segmentation task.
Inputs have shape [B, C, W, H], and targets have shape [B, W, H].
The target is not one-hot encoded.
```
   3012 if size_average is not None or reduce is not None:
   3013     reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3014 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)

RuntimeError: 0D or 1D target tensor expected, multi-target not supported
```
I guess your model output has another shape than the one reported here, as your shapes would work:

```python
import torch
import torch.nn as nn

B, C, H, W = 2, 3, 4, 4
criterion = nn.CrossEntropyLoss()

# Spatial case: output [B, C, H, W], target [B, H, W] with class indices in [0, C)
output = torch.randn(B, C, H, W, requires_grad=True)
target = torch.randint(0, C, (B, H, W))  # randint's high bound is exclusive, so use C, not C-1
loss = criterion(output, target)  # works

# Flat case: a [B, C] output does not match the spatial target
output = torch.randn(B, C)
loss = criterion(output, target)
# RuntimeError: 0D or 1D target tensor expected, multi-target not supported
```
I double-checked the model output and it has shape [B, C, H, W], in my case [2, 3, 256, 256].
The target previously had shape [B, 1, H, W]; after I applied target = target.squeeze(dim=1), it has shape [B, H, W] → [2, 256, 256].
I still do not understand why I get that kind of error.
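A minimal sketch that isolates the loss call with the exact shapes reported above ([2, 3, 256, 256] output, [2, 1, 256, 256] target before squeezing) may help narrow this down; the model is replaced by a random tensor, so it only verifies the tensors handed to the criterion. The dtype check is an assumption about another common pitfall (CrossEntropyLoss expects class-index targets of dtype long), not something confirmed by the thread; if this snippet runs but the training loop still fails, printing `output.shape` and `target.shape` immediately before the failing `criterion(...)` call should reveal where the shapes diverge.

```python
import torch
import torch.nn as nn

B, C, H, W = 2, 3, 256, 256
criterion = nn.CrossEntropyLoss()

output = torch.randn(B, C, H, W, requires_grad=True)  # [2, 3, 256, 256]

# Un-squeezed target [2, 1, 256, 256], as described above
target = torch.randint(0, C, (B, 1, H, W))
target = target.squeeze(dim=1)                         # -> [2, 256, 256]

# Targets must be class indices of dtype long, not floats or one-hot
assert target.dtype == torch.long
assert target.shape == (B, H, W)

loss = criterion(output, target)  # should work with these shapes
loss.backward()
```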