How to deal with nan in the target images

I am trying to train a U-Net. The input tensors are complete (without nan values). However, the target tensors contain numerous nan values due to the detection method, e.g., surface properties masked by clouds. How can I train the U-Net with these masked target images?

Hi fdzoom!

You don’t say much about your use case or what loss criterion you are using.

But, for example, if you were performing multi-class semantic segmentation
(something for which U-Net could be appropriate), CrossEntropyLoss
would be a common choice of loss criterion.

In this case you could pre-process your targets, replacing the nans with a
non-nan sentinel value, and then use that value for CrossEntropyLoss’s
ignore_index constructor argument. (It would be convenient to use -100
for this as -100 is ignore_index’s default value.)
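For illustration, here is a minimal sketch of that idea, assuming float targets where nan marks the masked pixels (the shapes and values are made up for the example):

```python
import torch

# Hypothetical float targets where nan marks masked pixels.
targets_float = torch.tensor([[0.0, 1.0], [float("nan"), 2.0]])

# Replace nans with -100 (ignore_index's default value), then cast to the
# integer class indices that CrossEntropyLoss expects.
targets = torch.where(
    torch.isnan(targets_float),
    torch.full_like(targets_float, -100.0),
    targets_float,
).long()

# Toy logits of shape (batch, num_classes, width) to match the targets.
logits = torch.randn(2, 3, 2)

loss = torch.nn.CrossEntropyLoss(ignore_index=-100)(logits, targets)
```

The pixels that were nan contribute nothing to the loss or to the gradients.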

Best.

K. Frank

Thanks for the reply.
I’m dealing with a regression problem, using input tensors of shape (5, 256, 256) to predict a target image of shape (256, 256). The values of the target images vary from 0 to 1, and nan values are very common. The loss criterion I currently use is therefore MSELoss. Following your proposal, I wonder if I can replace the nans with non-nan values and use a masked MSELoss?

Hi fdzoom!

Yes, that would seem like a sensible thing to try.

I’m not aware of pytorch having a built-in masked MSE loss, but it would be
completely straightforward to write your own. Or you could use MSELoss
with reduction = 'none' (after replacing your nans with some placeholder
value) and then perform your own masked average over the unreduced
result.
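As a concrete sketch of that second approach (the function name and shapes are just for illustration):

```python
import torch

def masked_mse_loss(pred, target):
    """MSE averaged only over the non-nan target pixels."""
    mask = ~torch.isnan(target)
    # Replace nans with a placeholder so the elementwise loss is finite,
    # then zero out the masked positions and average over the valid ones.
    safe_target = torch.where(mask, target, torch.zeros_like(target))
    per_pixel = torch.nn.functional.mse_loss(
        pred, safe_target, reduction="none"
    )
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1)

# Toy batch matching the shapes described above.
pred = torch.rand(4, 256, 256)
target = torch.rand(4, 256, 256)
target[:, :50, :] = float("nan")  # simulate cloud-masked pixels
loss = masked_mse_loss(pred, target)
```

Because the nans are replaced before the elementwise loss is computed, no nan ever enters the backward pass, and the masked pixels contribute zero gradient.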

Best.

K. Frank