Unet pixel-wise weighted loss function

Hi, has anyone successfully implemented the pixel-wise weighted loss function described in the U-Net paper?
Thanks

Hi,
Yes, of course.
Here are some links to implementations from the GitHub community. Their authors have trained and used these models, so they should be correct.
link1: https://github.com/milesial/Pytorch-UNet
link2: https://github.com/usuyama/pytorch-unet
link3: https://github.com/jvanvugt/pytorch-unet
link4: https://www.kaggle.com/witwitchayakarn/u-net-with-pytorch

If there is any part you don't understand, or you have other questions, feel free to ask.

Hi Nikronic, Thanks for the links!
However, none of these U-Net implementations uses the pixel-wise weighted softmax cross-entropy loss defined in the U-Net paper (page 5).

I’ve tried to implement it myself, using a modified version of this code to compute the weights, which I then multiply with the per-pixel CrossEntropyLoss:

# per-pixel loss (no reduction), then a weighted average
loss = nn.CrossEntropyLoss(reduction='none')(output, target)
loss = torch.mean(loss * weights)

where `output` is the output tensor from the U-Net.
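
For reference, a minimal self-contained sketch of what I mean (the shapes and variable names here are just illustrative assumptions, not my actual model):

```python
import torch
import torch.nn as nn

# Hypothetical shapes: batch of 2 images, 3 classes, 4x4 pixels.
N, C, H, W = 2, 3, 4, 4
output = torch.randn(N, C, H, W)         # raw logits from the U-Net
target = torch.randint(0, C, (N, H, W))  # integer class map per pixel
weights = torch.rand(N, H, W)            # per-pixel weight map, same shape as target

# reduction='none' keeps one loss value per pixel (shape N x H x W),
# so it can be multiplied element-wise by the weight map before averaging.
per_pixel = nn.CrossEntropyLoss(reduction='none')(output, target)
loss = (per_pixel * weights).mean()
```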

With this implementation, my model fails completely (F1 score = 0).

So, I’ve replaced the CrossEntropyLoss with:

m = nn.Sigmoid()
# reduction='none' replaces the deprecated reduce=False
loss_per_pixel = nn.BCELoss(reduction='none')(m(input), target) * weights

In this case my model converges to a reasonably good result, but it is less accurate than with a non-weighted loss function.
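
As a side note, the explicit sigmoid + BCELoss pair can be folded into `BCEWithLogitsLoss`, which is numerically more stable; a sketch under the same assumed shapes (again, names are illustrative only):

```python
import torch
import torch.nn as nn

N, H, W = 2, 4, 4
logits = torch.randn(N, 1, H, W)                     # raw single-channel U-Net output
target = torch.randint(0, 2, (N, 1, H, W)).float()   # binary ground-truth mask
weights = torch.rand(N, 1, H, W)                     # per-pixel weight map

# BCEWithLogitsLoss applies the sigmoid internally (log-sum-exp trick),
# so no explicit nn.Sigmoid() is needed; reduction='none' keeps per-pixel losses.
per_pixel = nn.BCEWithLogitsLoss(reduction='none')(logits, target)
loss = (per_pixel * weights).mean()
```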

So, I am wondering if I am doing something wrong.
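
For context, here is a rough, self-contained sketch of the paper's weight-map formula, w(x) = w_c(x) + w0 · exp(−(d1(x) + d2(x))² / (2σ²)) with w0 = 10 and σ = 5 (page 5); the inverse-frequency form of w_c and applying the border term only to background pixels are my assumptions, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, label

def unet_weight_map(mask, w0=10.0, sigma=5.0):
    """mask: binary 2D array (1 = foreground). Returns a per-pixel weight map."""
    labels, n = label(mask)
    # Class-balance term w_c: inverse class frequency (an assumption).
    wc = np.where(mask > 0, 1.0 / max(mask.mean(), 1e-6),
                  1.0 / max(1.0 - mask.mean(), 1e-6))
    if n < 2:
        return wc  # the border term needs at least two separate objects
    # Distance from every pixel to each labelled object.
    dists = np.stack([distance_transform_edt(labels != i) for i in range(1, n + 1)])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]  # nearest and second-nearest object
    border = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    # Apply the border term on background pixels only (an assumption).
    return wc + border * (mask == 0)
```

The border term becomes large exactly in the narrow background gaps between touching objects, which is what the paper uses to force the network to learn separation borders.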

Thanks for your help!

I’ve implemented a weighted pixel-wise NLLLoss a while ago in this topic. The code is quite old (e.g., it still uses the deprecated Variable API), but it might be a good starting point for your custom loss.
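
In current PyTorch the same idea looks roughly like this (a fresh sketch with assumed shapes, not the code from that topic); note that log-softmax followed by NLLLoss is equivalent to CrossEntropyLoss:

```python
import torch
import torch.nn.functional as F

N, C, H, W = 2, 3, 4, 4
logits = torch.randn(N, C, H, W)         # raw network output
target = torch.randint(0, C, (N, H, W))  # per-pixel class indices
weights = torch.rand(N, H, W)            # per-pixel weight map

# NLLLoss expects log-probabilities, so apply log_softmax over the class dim.
log_probs = F.log_softmax(logits, dim=1)
per_pixel = F.nll_loss(log_probs, target, reduction='none')  # shape N x H x W
loss = (per_pixel * weights).mean()
```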

Hey,
this is an old thread, but I believe Sam’s formulation is right:

I’ve used this loss myself in the past and it worked out fairly well, provided the scale of the weights is “correct”. Make sure to pair it with something like a Dice loss, so that the cross-entropy term can be used to incentivize precision around the edges.
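
For illustration, a minimal sketch of such a combination for the binary case (the soft-Dice formulation, the smoothing term, and all names here are assumptions, not from this thread):

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1.0):
    # Soft Dice for a binary mask: 1 - 2|P∩T| / (|P| + |T|), with smoothing eps.
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    union = probs.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)

# Hypothetical combined objective: weighted BCE for edge precision + Dice for overlap.
N, H, W = 2, 8, 8
logits = torch.randn(N, 1, H, W)
target = torch.randint(0, 2, (N, 1, H, W)).float()
weights = torch.rand(N, 1, H, W)

bce = (F.binary_cross_entropy_with_logits(logits, target, reduction='none') * weights).mean()
loss = bce + dice_loss(logits, target)
```

The Dice term keeps the overall region overlap healthy, while the pixel-weighted BCE term concentrates the gradient signal on the border pixels the weight map emphasizes.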