What to do when all errors are due to one class?

Hi all, I recently realized that 80% of my errors come from my model predicting class 1 as class 0. I have 5 classes in total.

I had the idea of pretraining the model on binary classification of classes 0 and 1, so that it starts with weights that can already tell the two apart. However, I've never seen a DenseNet used for binary classification, so I'm not sure how it could be done.

In general, what are some techniques to use when most of the errors come from confusing one class with another? Augmentation? Transformations?


Loss weighting might be another approach.
If you are dealing with an imbalanced dataset, then WeightedRandomSampler might counter this effect.
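Both suggestions can be sketched on a toy imbalanced dataset (the 900-vs-100 class counts below are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

targets = torch.cat([torch.zeros(900), torch.ones(100)]).long()
data = torch.randn(1000, 10)

# Loss weighting: give the minority class a larger weight in the criterion
class_counts = torch.bincount(targets).float()
criterion = torch.nn.CrossEntropyLoss(weight=class_counts.sum() / class_counts)

# WeightedRandomSampler: draw each sample with probability inversely
# proportional to its class frequency, so batches come out roughly balanced
sample_weights = (1.0 / class_counts)[targets]
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(targets),
                                replacement=True)
loader = DataLoader(TensorDataset(data, targets),
                    batch_size=100, sampler=sampler)

# One pass over the loader: the minority class now shows up ~50% of the
# time instead of 10%
drawn = torch.cat([t for _, t in loader])
minority_frac = (drawn == 1).float().mean()
```

Note that `replacement=True` means minority samples are repeated within an epoch, which is the intended behavior here.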


Thanks patrick! I tried that, but it decreased the accuracy by around 20%. Is there anything to adjust when using a balanced DataLoader? The LR, maybe?

If you are overfitting on the majority class, you could use weighted sampling to trade sensitivity for specificity (in a binary use case).
Note that the accuracy for an imbalanced dataset might be misleading as described in the accuracy paradox.
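The accuracy paradox is easy to demonstrate with made-up numbers: a degenerate classifier that always predicts the majority class scores high accuracy while never detecting the minority class.

```python
import torch

# 950 majority-class samples vs 50 minority-class samples (illustrative)
targets = torch.cat([torch.zeros(950), torch.ones(50)]).long()
preds = torch.zeros(1000).long()  # always predict the majority class

accuracy = (preds == targets).float().mean()          # 0.95, looks great
recall_0 = (preds[targets == 0] == 0).float().mean()  # 1.0
recall_1 = (preds[targets == 1] == 1).float().mean()  # 0.0, never found
balanced_accuracy = (recall_0 + recall_1) / 2         # 0.5, chance level
```

Per-class recall or balanced accuracy makes the failure visible, which is why they are better progress metrics than plain accuracy on imbalanced data.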