I am doing a segmentation project with a Unet. I have an unbalanced dataset with two classes and I want to apply, as a first step, a weight for each class. I use the loss torch.nn.BCELoss(). After looking on the internet, it seems that people who had a similar problem were advised to switch to BCEWithLogitsLoss(), which has a pos_weight argument for choosing class weights.
I would prefer, if possible, to keep using torch.nn.BCELoss() because, from what I understand, if I used BCEWithLogitsLoss() I would have to remove the sigmoid layer at the end of my network. That means that everywhere I make a prediction with code like pred = model(data), I would then need to apply a sigmoid to the prediction to get the same output as before. This doesn't seem great, as there are many places in my code where I use pred = model(data).
Have I understood my problem correctly? Is there any way to keep using BCELoss()?
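For reference, the switch described above would look roughly like this; the shapes and the weight value of 5.0 are made up for illustration:

```python
import torch
import torch.nn as nn

# Stand-ins for a Unet's raw output (no sigmoid layer) and a binary mask
logits = torch.randn(2, 1, 4, 4)                     # model(data) without the final sigmoid
target = torch.randint(0, 2, (2, 1, 4, 4)).float()

# pos_weight scales the loss contribution of the positive class
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(5.0))  # assumed weight
loss = criterion(logits, target)

# ...and every existing call site would need an explicit sigmoid afterwards:
pred = torch.sigmoid(logits)   # same output as the old model with its sigmoid layer
```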
I believe you should be able to manually weight the unreduced loss if you are using binary targets. If that's not the case, you would need to use nn.BCEWithLogitsLoss with the pos_weight argument. This issue explains the use case in a bit more detail, and this code snippet shows the approach:
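A minimal sketch of that manual weighting, assuming binary (0/1) targets and a made-up positive-class weight of 5.0:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

logits = torch.randn(2, 1, 4, 4)                     # raw network output
pred = torch.sigmoid(logits)                         # output of a model ending in a sigmoid
target = torch.randint(0, 2, (2, 1, 4, 4)).float()   # binary segmentation mask

pos_weight = 5.0  # assumed value; tune for your class imbalance

# Keep the per-pixel losses instead of reducing them right away
criterion = nn.BCELoss(reduction='none')
unreduced = criterion(pred, target)

# Scale the loss at positive-target pixels, then reduce manually
weights = torch.where(target == 1.0,
                      torch.full_like(target, pos_weight),
                      torch.ones_like(target))
loss = (unreduced * weights).mean()
```

For 0/1 targets this reproduces nn.BCEWithLogitsLoss(pos_weight=torch.tensor(5.0)) applied to the raw logits, up to floating-point tolerance, since the positive term of the loss is simply scaled by the weight.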