Different type of weighting with NLLLoss2d

I am doing image segmentation, and I am using torch.nn.NLLLoss2d.
I see that this loss has an option to supply a weight for each class,
but I want a different kind of weighting when computing the loss.
torch.nn.NLLLoss2d computes the cross-entropy loss for each pixel and averages over all of them. Instead of averaging over all pixels, I want to weight each pixel with a predetermined fixed weight. I have this set of weights as a 224x224 matrix, one value per pixel; [batch_size, (no. of classes), 224, 224] is the shape of my CNN's final output. How can I do this in PyTorch?
Thank you for your time!

You can use the reduce=False option, which gives you a loss per pixel instead of a single averaged value. Then you can do something like the following:

loss = nn.NLLLoss2d(reduce=False)
# x has size (N, C, 224, 224) and holds log-probabilities (e.g. from log_softmax);
# t has size (N, 224, 224) and holds integer class labels
out = loss(x, t)
# out has size (N, 224, 224); your weights have size (224, 224) and broadcast over N
result = (out * weights).sum()
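A runnable sketch of the idea above, with made-up shapes and random data. Note that in recent PyTorch versions nn.NLLLoss2d is deprecated; nn.NLLLoss with reduction='none' accepts 4D inputs directly and plays the same role. The normalization by the weight sum is one reasonable choice, not the only one:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N, C, H, W = 2, 3, 224, 224
logits = torch.randn(N, C, H, W)        # raw CNN output
x = F.log_softmax(logits, dim=1)        # NLLLoss expects log-probabilities
t = torch.randint(0, C, (N, H, W))      # integer class target per pixel
weights = torch.rand(H, W)              # fixed per-pixel weight map

loss_fn = nn.NLLLoss(reduction='none')  # keep a loss value per pixel
out = loss_fn(x, t)                     # shape (N, H, W)

# (H, W) weights broadcast against (N, H, W) losses;
# divide by the total weight so the result is a weighted mean
result = (out * weights).sum() / (N * weights.sum())
```

result is a scalar tensor you can call .backward() on, just like the usual averaged loss.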

@richard's solution seems nice and clean. Some time ago I implemented it myself here. Maybe it's still useful. :wink:


Thank you guys! This works!