Semantic segmentation inference of pixels with ignore_index

Hello, I am applying semantic segmentation to a dataset with 4 semantic labels and 1 null label (255), which is passed as ignore_index to the loss function. When I test my model and visualize the predictions, the model seems to assign one of the 4 semantic labels to the ignore_index pixels. My question is: is this a random assignment to those pixels, or is the model applying normal segmentation to them?

I would claim it’s both: the model applies the same processing to these pixel values, but since it wasn’t trained on these inputs, the results might be “random”.

The null class contains pixels at a range above 30 meters, selected by a mathematically calculated mask. This means that some of the null pixels actually belong to the main classes but were nulled because they lie beyond that range. Does that make the predictions not random?

I’m not sure, as I don’t fully understand the use case. Could you give me more details about why these pixels have to be “nulled” and why you are ignoring them in the loss calculation?

I am using a dataset of Mars images taken by rovers. The semantic labels include some unlabeled pixels, due to disagreement between the experts annotating the data, and these are part of the null class. There was also a method used to determine which pixels in an image lie beyond a 30 m range; those were masked and added to the null class as well. So even though I assign them to the ignore label, the ignored pixels can still contain one of the classes included in training, and the model is going to predict them normally. But can I rely on it predicting them by applying its trained parameters, or does it work randomly? I hope that’s clearer, and thanks a lot btw!

The model will process all inputs using its trained parameters.
So e.g. in case you are using a CNN-like model, the conv kernels will also be applied to the pixels which are later ignored in the loss calculation.
From this point of view the predictions will thus not be random, and the model will output logits for all pixels using its parameters.
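To make this concrete, here is a minimal sketch (a toy setup with random tensors, not your actual model) showing that `nn.CrossEntropyLoss(ignore_index=255)` only excludes the null pixels from the loss, while the argmax over the logits still yields a class for every pixel:

```python
import torch
import torch.nn as nn

# Toy setup: 4 classes, 255 = null label (shapes are illustrative).
logits = torch.randn(1, 4, 8, 8)            # model output: logits for every pixel
target = torch.randint(0, 4, (1, 8, 8))
target[0, :2, :] = 255                      # pretend these pixels are "null"

criterion = nn.CrossEntropyLoss(ignore_index=255)
loss = criterion(logits, target)            # null pixels contribute nothing to the loss

pred = logits.argmax(dim=1)                 # argmax still yields a class in 0..3 everywhere
```

So the ignore_index only affects training; at inference time nothing filters these pixels out.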

However, the model was never trained to predict these pixels. If these pixels have a completely different value range or differ in their statistics in any other way, one could claim that this input data is “out of bounds” or from another domain. How to interpret these pixel predictions thus depends on your use case and how you would like to treat them.
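If you simply don’t want to show predictions for these pixels, one common option (a sketch with made-up tensors, assuming you still have the ground-truth null mask at visualization time) is to re-apply the null label before plotting:

```python
import torch

# Hypothetical post-processing: hide predictions at null pixels before visualizing.
pred = torch.randint(0, 4, (8, 8))       # per-pixel argmax from the model
target = torch.randint(0, 4, (8, 8))
target[:2, :] = 255                      # pixels nulled by the range mask / annotator disagreement

masked_pred = pred.clone()
masked_pred[target == 255] = 255         # restore the null label for display
```

This keeps the model untouched and only changes what you visualize or evaluate.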
