Class imbalance in pixel classification

Hi,

I'm dealing with a dataset with pixel-level annotations, say classes A1, A2, A3 plus background pixels.
There is a huge class imbalance between the A classes and the background pixels. How is one supposed to tackle this?

For training you will have to convert the pixel annotations to integer class labels for the cross-entropy loss. You can use bincount to count the occurrences of each class and then use a weighted cross-entropy loss.
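
Roughly, that could look like the sketch below (the 4-class setup, tensor shapes, and variable names are just assumptions for illustration):

```python
import torch
import torch.nn as nn

num_classes = 4  # background + A1, A2, A3 (assumed)

# Hypothetical per-pixel label map: shape (N, H, W), values in [0, num_classes).
target = torch.randint(0, num_classes, (2, 64, 64))

# Count how many pixels belong to each class.
counts = torch.bincount(target.flatten(), minlength=num_classes).float()

# Inverse-frequency weights, i.e. 1 / (count_c / total), as discussed in the thread.
weights = counts.sum() / counts

criterion = nn.CrossEntropyLoss(weight=weights)

# Logits would come from your segmentation model: shape (N, C, H, W).
logits = torch.randn(2, num_classes, 64, 64)
loss = criterion(logits, target)
```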

Thanks, I did exactly that.
Is there a good resource on choosing the right weights? I have been using 1/frequency as the weights.

Not sure, but I do the same. 1/frequency works well. You could also try something like 1/softmax(frequency); that variant won't blow up if a class is missing from the dataset (where 1/frequency -> inf) or is extremely infrequent. It depends on your requirements. By the way, I never tested it, I just made it up.
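
A quick sketch of both weighting schemes, assuming the per-class pixel counts come from torch.bincount as above (the counts here are made-up numbers):

```python
import torch

counts = torch.tensor([50000., 1200., 300., 0.])  # hypothetical pixel counts per class
freq = counts / counts.sum()

# Plain inverse frequency: diverges when a class has zero pixels.
inv_freq_weights = 1.0 / freq                      # -> inf for the empty class

# Inverse of softmax(frequency): stays finite, since softmax(x) > 0 for every class.
inv_softmax_weights = 1.0 / torch.softmax(freq, dim=0)
```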


Can we make use of the ignore-label argument of the loss function for a class like the background pixels?

Use the ignore_index argument. That should do the job.
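
For example, something like this minimal sketch (assuming the background class is labeled 0):

```python
import torch
import torch.nn as nn

# Pixels labeled with ignore_index contribute nothing to the loss or gradients.
criterion = nn.CrossEntropyLoss(ignore_index=0)

logits = torch.randn(2, 4, 64, 64)          # (N, C, H, W) model output
target = torch.randint(0, 4, (2, 64, 64))   # (N, H, W) per-pixel labels

loss = criterion(logits, target)
```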

If some class is not present in your input images, why would you have labels for it? Also, to be doubly sure, you can use the ignore_index argument as mentioned in the reply above.