Penalize low-density regions in an image

Hi, I am using a UNet model with BCELoss, where each image has a per-pixel mask. For some reason the model is mispredicting the low-density regions as 1, which is exactly the opposite of what I want: a mask value of 1 corresponds to the high-intensity regions, not the low-intensity ones. My images have three parts: (1) the normal regions, (2) the high-intensity regions, which I want my model to detect, and (3) the low-intensity regions, which are 0s in the original image (they were initially null values that I replaced with 0). Is there a way to have a threshold detector and then add another model for my mask detection? Any suggestions on how I can do that?

You could use a weighted loss, increasing the loss for the classes you want to penalize.
I’m not familiar with your use case, but since you are using nn.BCELoss, it seems you are working on a multi-label segmentation, where each pixel can belong to zero, one, or multiple classes. Is that the case here, or are you rather working on a multi-class segmentation where each pixel belongs to exactly one class, in which case nn.CrossEntropyLoss would be the right criterion?
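To make the distinction concrete, here is a rough sketch of what the output and target shapes would look like in each case (the sizes below are just placeholders, not taken from your setup):

```python
import torch
import torch.nn as nn

batch_size, height, width = 4, 64, 64  # placeholder sizes

# Multi-label / binary setup with nn.BCELoss: probabilities and float targets,
# both of shape [batch, channels, height, width]
probs = torch.sigmoid(torch.randn(batch_size, 1, height, width))
targets_float = torch.randint(0, 2, (batch_size, 1, height, width)).float()
loss_bce = nn.BCELoss()(probs, targets_float)

# Multi-class setup with nn.CrossEntropyLoss: raw logits of shape
# [batch, num_classes, height, width] and long targets of shape [batch, height, width]
num_classes = 2
logits = torch.randn(batch_size, num_classes, height, width)
targets_long = torch.randint(0, num_classes, (batch_size, height, width))
loss_ce = nn.CrossEntropyLoss()(logits, targets_long)
```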

Hi, thank you for your response. Are you suggesting to use CrossEntropyLoss? My model and training steps are the same as in the following link: UNET model on GPU runtime error
The issue is that for each image patch only a very small portion of it belongs to mask class 1. My ground truth has only two pixel values: either 1 or 0.

I am not sure how to implement a weighted loss that increases the loss for class 0. Can you suggest some methods?

It depends on the use case, and in particular on whether you are working on a multi-label or a multi-class classification, as explained in my previous post.

nn.BCEWithLogitsLoss and nn.CrossEntropyLoss provide the pos_weight and weight arguments, respectively, which can be used to increase the weight of e.g. class 0.
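A minimal sketch of how the two criteria could be constructed (the weight values are made up and would need tuning for the actual class imbalance):

```python
import torch
import torch.nn as nn

# Multi-label / binary case: pos_weight rescales the loss of positive (target == 1) pixels.
# A value > 1 emphasizes the positive class; a value < 1 shifts the emphasis towards class 0.
pos_weight = torch.tensor([0.2])  # one entry per output channel; example value
criterion_bce = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

# Multi-class case: weight holds one factor per class, so class 0 can be upweighted directly.
class_weights = torch.tensor([5.0, 1.0])  # [weight for class 0, weight for class 1]
criterion_ce = nn.CrossEntropyLoss(weight=class_weights)
```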


Thanks for the suggestion. I tried something along the lines of:

import torch
import torch.nn as nn

pos_weight = torch.ones([2])
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

But that didn’t work well, as I got the error: RuntimeError: The size of tensor a (2) must match the size of tensor b (256) at non-singleton dimension 3
My input image is of size 256x256, and I have a custom mask for each image, so my mask is again 256x256 with 1 or 0 at each pixel. The batch size is 10, and I am training my model to take the input image and predict 0 or 1 for each input pixel. What would be a good way to code this?
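For reference, a minimal sketch of the shapes described above (assuming the model outputs one logit per pixel, i.e. [10, 1, 256, 256]); with these shapes a pos_weight of size 2 cannot broadcast against the last dimension of 256, whereas a single-element pos_weight can:

```python
import torch
import torch.nn as nn

batch_size, height, width = 10, 256, 256  # sizes taken from the description above

# Assumed model output: one logit per pixel
logits = torch.randn(batch_size, 1, height, width)
targets = torch.randint(0, 2, (batch_size, 1, height, width)).float()

# pos_weight must broadcast against the target shape, so for a single-channel
# binary mask a one-element tensor works, whereas torch.ones([2]) raises the
# size-mismatch error quoted above.
pos_weight = torch.tensor([5.0])  # example value, tune for the class imbalance
loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = loss_fn(logits, targets)
```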