How to correctly resolve an imbalanced classification problem?

It’s a binary classification problem on time-series data.

I built a vanilla GRU network to confirm the importance of the features.

Because the samples are heavily imbalanced, i.e. the majority class far outnumbers the minority class (about 250:1), I had to find a way to deal with it.

At first, I simply discarded random samples from Class0 (the majority class) so that both classes had an equal number of samples, and then trained my GRU model on the balanced set.
At every epoch, I shuffled the training and validation sets and re-drew the random Class0 subsample, building a new training set and a new validation set each time.
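A simplified sketch of that undersampling step (`X` and `y` here are stand-ins for my real feature and label tensors, not my actual code):

```python
import torch

def make_balanced_epoch_set(X, y, minority_label=1):
    """Randomly undersample the majority class down to the minority class size."""
    minority_idx = torch.nonzero(y == minority_label).squeeze(1)
    majority_idx = torch.nonzero(y != minority_label).squeeze(1)
    # Draw a fresh random subset of Class0 every epoch.
    subset = majority_idx[torch.randperm(majority_idx.numel())[:minority_idx.numel()]]
    keep = torch.cat([minority_idx, subset])
    keep = keep[torch.randperm(keep.numel())]  # shuffle the combined balanced set
    return X[keep], y[keep]
```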

Training and validation went well, and the metric (F0.5 score) looked decent, so I assume the features I constructed are effective.

But when I used the trained model to predict on real data, it was flooded with Class0 (majority) samples and performed badly, with a poor score.

Hence I switched to `WeightedRandomSampler`, to feed as many Class0 samples as possible to the model. After roughly 10 epochs, every sample (whether Class0 or Class1) has been seen by the model at least once.
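Roughly like this (a minimal sketch assuming a `TensorDataset` named `train_ds` and a label tensor `y`; the inverse-frequency weighting shown is the standard scheme, though my exact weights may differ):

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# Inverse-frequency weights: a sample's draw probability is proportional
# to 1 / (count of its class), so both classes are drawn about equally often.
class_counts = torch.bincount(y)                 # e.g. tensor([250000, 1000])
sample_weights = 1.0 / class_counts[y].float()

sampler = WeightedRandomSampler(
    weights=sample_weights,
    num_samples=len(y),   # one epoch's worth of draws
    replacement=True,     # minority samples repeat within an epoch
)
loader = DataLoader(train_ds, batch_size=64, sampler=sampler)
```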

But trained this way, the model has a hard time learning from the data.

I’m new to deep learning; could someone kindly help me?

Maybe you can reweight the loss for those unbalanced classes.

Thanks!

How do I do that?
Are there any examples?

You can use Focal loss. But I think the best way to mitigate the class imbalance is to oversample your dataset.
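For instance, here is a compact binary focal loss (my own sketch of the standard formulation, not tested on your data; `alpha` and `gamma` are the usual tunable hyperparameters):

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss: down-weights easy examples so training focuses on hard ones."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # model's probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()
```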
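And for the loss reweighting suggested earlier, PyTorch has this built in. A minimal sketch assuming a single-logit binary model (`model`, `batch_x`, and `batch_y` are placeholders; the 250.0 just mirrors the ~250:1 ratio you mentioned and should be tuned):

```python
import torch
import torch.nn as nn

# pos_weight scales the loss on positive (minority, Class1) samples;
# the majority:minority ratio is a common starting point.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([250.0]))

logits = model(batch_x).squeeze(1)          # raw scores, shape (batch,)
loss = criterion(logits, batch_y.float())   # targets are 0.0 / 1.0
```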