I am working with a highly imbalanced dataset. I have read a few papers that use weighted cross-entropy loss for class imbalance, and I have also seen some examples using a weighted sampler. However, with a weighted sampler the model ‘misses’ a large portion of the majority class during each epoch, since the minority class is now overrepresented in the training batches. What is the common practice for dealing with imbalanced datasets? Are weighted samplers and weighted cross-entropy used together, or only one of them?
You should use only one of them.
The purpose of both techniques is to compensate for the over-representation of one class relative to the rest. To do so you can either modify the sampler (classes are drawn in a balanced way, so the optimization converges toward a solution that treats them equally) or weight the loss (the data remain imbalanced, but you force the network to learn “more” from the samples that are under-represented).
If you use both, you end up in a non-balanced scenario again: the minority class is both oversampled in the batches and up-weighted in the loss, so it is now over-compensated at the expense of the majority class.
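To make the two options concrete, here is a minimal sketch of computing inverse-frequency class weights from a label list (the label counts are made up for illustration). The same weights can feed either approach: per-sample weights for a weighted sampler (e.g. `torch.utils.data.WeightedRandomSampler`) or per-class weights for a weighted loss (e.g. the `weight` argument of `torch.nn.CrossEntropyLoss`) — but, as above, you would pick one, not both.

```python
from collections import Counter

# Hypothetical imbalanced label list: class 0 is the majority class.
labels = [0] * 90 + [1] * 10

counts = Counter(labels)
n_samples = len(labels)
n_classes = len(counts)

# Inverse-frequency weight per class: rarer classes get larger weights.
class_weights = {c: n_samples / (n_classes * cnt) for c, cnt in counts.items()}

# Option A: one weight per *sample*, for a weighted sampler.
sample_weights = [class_weights[y] for y in labels]

# Option B: one weight per *class*, for a weighted loss.
loss_weights = [class_weights[c] for c in sorted(counts)]

print(class_weights)
```

With this weighting, each class contributes the same total weight (count × weight), which is exactly the balance both techniques aim for.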