UNet output always scaled down in intensity

For some reason, even in simple cases, the masks I get back from the original UNet model (with padding) do not have the same intensity as the masks it was trained on. The shapes of the masks inferred by the UNet are very good, but the intensity always seems scaled to 80-90% of the original masks' intensity.

In my case, the intensity is pretty important as I want to extract signals mixed with noise. If the intensity is not scaled correctly, I can’t fully extract the signal or fully remove the noise.

I tried two approaches: training mix vs. signal, and mix vs. mask of the signal. Both approaches lead to the same conclusion: great shape detection and great relative intensities, but the overall intensity is always scaled by some ratio for a reason I don't know.

The UNet model I'm using is as follows:

Contracting/Expanding Blocks: Conv2d, ReLU, Dropout, Conv2d, Dropout
Downsampling: MaxPool2d
Upsampling: ConvTranspose2d
For reference, here’s my UNet code: https://github.com/divideconcept/PyTorch-libtorch-U-Net/blob/master/unet.h
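For readers without the repo at hand, here is a minimal sketch of one such block using the Python API (the libtorch modules carry the same names); the channel counts and dropout rate below are illustrative, not taken from the linked code:

```python
import torch
import torch.nn as nn

# One contracting block as listed above: Conv2d, ReLU, Dropout, Conv2d, Dropout.
# Channel counts (1 -> 16) and p=0.5 are placeholders, not values from the repo.
block = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # zero padding keeps H x W
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.Dropout(0.5),
)

x = torch.rand(1, 1, 64, 64)
print(block(x).shape)  # spatial size preserved by the padding
```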

I apply padding (with zeroes) at each 3x3 Conv2d to avoid cropping the image. Could it be because of that?
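One way to test the padding hypothesis: with zero padding, only the one-pixel border of the output can differ from a valid (unpadded) convolution, so padding alone cannot uniformly rescale the whole mask. A small check, written with the Python API (torch::conv2d is the libtorch equivalent):

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 1, 32, 32)
w = torch.rand(1, 1, 3, 3)

padded = F.conv2d(x, w, padding=1)   # 32x32 output, zero-padded borders
valid  = F.conv2d(x, w)              # 30x30 output, no padding

# Interior values are identical: zero padding only touches the border pixels,
# so it cannot scale the entire output by a uniform factor.
print(torch.allclose(padded[..., 1:-1, 1:-1], valid))  # True
```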

My mix image values are always within [0, 1], and so are the target signal image values.


No elaborate answer, just a few thoughts and questions:

To understand your dataset better, how many classes do you have in your mask images?

Did you already check that your masks are loaded correctly by displaying them? Make sure the mask images from your data loader resemble the mask images from your database on your PC. Could normalization be an issue?
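To make that normalization check concrete, a tiny helper that prints per-batch statistics can expose a silent rescale in the pipeline; `mask` below is a hypothetical stand-in for one batch from your data loader:

```python
import torch

def describe(t: torch.Tensor, name: str) -> str:
    # Quick sanity check: a silent rescale in the loader shows up immediately
    # as an unexpected min/max/mean compared to the files on disk.
    line = (f"{name}: min={t.min().item():.3f} "
            f"max={t.max().item():.3f} mean={t.mean().item():.3f}")
    print(line)
    return line

# 'mask' stands in for one batch from the data loader (not real data here).
mask = torch.rand(1, 1, 64, 64)
describe(mask, "loader mask")
```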

What loss function do you use?

Can you post a code snippet for getting the prediction mask image?

Here are my inputs (left), the inference from the model (middle), and the response I was expecting (right).
As you can see the shape is quite good, but the intensity is slightly off by 10-20% compared to the target image.
Yes I checked the source, inference and targets multiple times.
FYI I don’t do any normalization steps, but the values are always between 0 and 1.
Here's my UNet code (PyTorch C++): https://github.com/divideconcept/PyTorch-libtorch-U-Net/blob/master/unet.h

My training loop is as follows (PyTorch C++):

result = model->forward(source[b]);
loss = torch::mse_loss(result, target[b]);
loss.backward();
optimizer.step(); // with optimizer.zero_grad() before the forward pass

Breaking news: removing the Dropout nodes solved the intensity problem, and changing the loss function from MSE (L2) to MAE (L1) helped a lot with the accuracy of the lower values.
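A plausible explanation for the dropout effect (an assumption on my part, not verified in this thread): if the network is still in training mode at inference time, Dropout keeps zeroing activations instead of acting as an identity. A quick demonstration with the Python API (`model->eval()` is the libtorch equivalent):

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.2)
x = torch.ones(10000)

drop.train()                  # training mode: zeroes ~20% of the activations
y_train = drop(x)             # and scales survivors by 1/(1-p) = 1.25
print((y_train == 0).float().mean())   # roughly 0.2

drop.eval()                   # eval mode: Dropout is an exact identity
y_eval = drop(x)
print(torch.equal(y_eval, x))          # True
```

As for the loss change: the gradient of the L1 loss keeps magnitude 1 even for small residuals, while the MSE gradient shrinks in proportion to the error, which is consistent with L1 improving accuracy on the low-intensity values.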
