Weights for Cross Entropy Loss that vary every batch


I have a dataset on which I do segmentation, classifying each pixel in an image as 0 or 1. The classes are imbalanced, and the degree of imbalance varies from image to image. For some images the imbalance factor is 30 (i.e. there are 30 times fewer class 1 pixels than class 0 pixels), and for others it is 70.

One solution is to set the weight parameter for Cross Entropy Loss based on the imbalance calculated over the entire dataset, which comes out to be 50.

The other solution is to sort the images by their imbalance factor and then calculate the weights for the cross entropy loss separately for each mini batch. So basically, we are giving different weights to the cross entropy loss for each mini batch.
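To make the second approach concrete, here is a minimal sketch of computing inverse-frequency class weights from each mini batch's targets and passing them to `nn.CrossEntropyLoss`. The helper name `batch_class_weights` and the random dummy tensors are my own illustration, not from the original post:

```python
import torch
import torch.nn as nn

def batch_class_weights(targets, num_classes=2, eps=1e-6):
    """Inverse-frequency weights computed from one batch of pixel labels.

    targets: LongTensor of shape (N, H, W) with values in [0, num_classes).
    Returns a tensor of shape (num_classes,) normalized to sum to num_classes.
    """
    counts = torch.bincount(targets.flatten(), minlength=num_classes).float()
    weights = counts.sum() / (counts + eps)  # rarer class -> larger weight
    return weights / weights.sum() * num_classes

# Hypothetical mini batch: 4 images of 64x64 pixels with roughly 30:1 imbalance.
targets = (torch.rand(4, 64, 64) > 0.97).long()
logits = torch.randn(4, 2, 64, 64)  # raw model outputs, shape (N, C, H, W)

# Recreate the loss each batch so the weight tensor matches this batch's stats.
weights = batch_class_weights(targets)
loss = nn.CrossEntropyLoss(weight=weights)(logits, targets)
```

Note that because `weight` is fixed at construction time, the loss module is rebuilt (or the functional form `F.cross_entropy(logits, targets, weight=weights)` is used) every batch rather than once at setup.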

My question is about the latter approach: is it a valid approach?