I have an unbalanced dataset with a 3-class output.
I managed to build well-stratified train and test sets.
My question: using WeightedRandomSampler gives me worse results than not using it (74% vs. 91% accuracy).
Is there a logical explanation for this?
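For context, this is roughly how I set up the sampling (a minimal sketch with made-up class counts; the inverse-frequency weighting is plain Python, and the PyTorch `WeightedRandomSampler` call is shown in comments):

```python
from collections import Counter

# Hypothetical label list for a 3-class problem (illustrative counts only)
labels = [0] * 70 + [1] * 20 + [2] * 10

# Inverse-frequency weight per class, then one weight per sample
counts = Counter(labels)
class_weights = {c: 1.0 / n for c, n in counts.items()}
sample_weights = [class_weights[y] for y in labels]

# With PyTorch, this list would feed the sampler, e.g.:
# from torch.utils.data import WeightedRandomSampler
# sampler = WeightedRandomSampler(sample_weights,
#                                 num_samples=len(sample_weights),
#                                 replacement=True)
```

With these weights each class contributes the same total sampling mass, so mini-batches are drawn roughly balanced in expectation.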
Last question: assuming my dataset were well balanced, am I required to have balanced mini-batches during an epoch? Can the first mini-batches contain, for example, only 2 of the classes, with the last ones containing the remaining class? Is this okay for training? Note that this question isn't specific to my use case.
Thank you very much,