FCN Implementation: Loss Function

I am trying to implement a fully convolutional network for semantic segmentation on the Pascal VOC dataset.
I am using nn.NLLLoss2d, but it throws an error, since it requires the target to be a LongTensor while I have image targets.
How can I work around this? Is there another loss function I can use instead?

TypeError: FloatSpatialClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor, int), but expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight, int ignore_index)


I assume your target is an image with the class index at each pixel.
Try casting it to a LongTensor before calculating the loss.

Here is a simple example:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

# Model output: (batch, classes, height, width)
x = Variable(torch.FloatTensor(1, 10, 10, 10).random_())
# Target loaded as floats, e.g. read from an image
y = Variable(torch.FloatTensor(1, 10, 10).random_(0, 10))

criterion = nn.NLLLoss2d()

# Cast the target to long before computing the loss
loss = criterion(F.log_softmax(x), y.long())

You could of course just try to load your target as a long array beforehand. :wink:
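A minimal sketch of that, assuming the mask comes in as a NumPy array of per-pixel class indices (the array here is randomly generated just for illustration):

```python
import numpy as np
import torch

# Hypothetical mask: per-pixel class indices for a 10x10 image, 10 classes.
mask = np.random.randint(0, 10, size=(10, 10))

# Convert to a LongTensor once, right after loading, so the loss
# function receives the dtype it expects.
target = torch.from_numpy(mask).long()

print(target.dtype)  # torch.int64
```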


NLLLoss2d has been deprecated, right? How should we do this now?

nn.NLLLoss now takes multi-dim input. Have a look at the docs for the shape information.
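For reference, here is a minimal sketch using the non-deprecated API (shapes assumed: batch of 1, 10 classes, 10x10 spatial resolution):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Raw network scores: (batch, classes, height, width)
x = torch.randn(1, 10, 10, 10)
# Target: per-pixel class indices, (batch, height, width), dtype long
y = torch.randint(0, 10, (1, 10, 10))

# nn.NLLLoss accepts multi-dimensional input directly;
# apply log_softmax over the class dimension (dim=1).
criterion = nn.NLLLoss()
loss = criterion(F.log_softmax(x, dim=1), y)

print(loss.shape)  # torch.Size([]) - a scalar with the default reduction
```

Equivalently, nn.CrossEntropyLoss combines the log_softmax and NLLLoss steps into a single call on the raw scores.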