Hi all, I recently realized that 80% of my model's errors come from predicting class 1 as class 0. I have 5 classes in total.
I had the idea to pretrain the model using binary classification on classes 0 and 1, so that the initial weights can already distinguish between the two. However, I've never seen a DenseNet used for binary classification, so I'm not sure how it could be done.
In general, what are some techniques to use when most of the errors come from confusing two particular classes? Augmentation? Transformations?
If you are overfitting on the majority class, you could use weighted sampling to trade sensitivity for specificity (in a binary use case).
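A minimal sketch of weighted sampling with PyTorch's `WeightedRandomSampler`, using a toy 90/10 imbalanced dataset (the data and class counts are made up for the example): each sample is weighted by the inverse frequency of its class, so batches end up roughly balanced.

```python
# Sketch: oversampling the minority class with WeightedRandomSampler.
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

torch.manual_seed(0)

# Toy imbalanced dataset: 90 samples of class 0, 10 of class 1
labels = torch.cat([torch.zeros(90, dtype=torch.long), torch.ones(10, dtype=torch.long)])
dataset = TensorDataset(torch.randn(100, 4), labels)

# Weight each sample by the inverse frequency of its class
class_counts = torch.bincount(labels)        # tensor([90, 10])
class_weights = 1.0 / class_counts.float()   # rarer class gets a larger weight
sample_weights = class_weights[labels]       # per-sample weight

sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)
loader = DataLoader(dataset, batch_size=10, sampler=sampler)

# The drawn labels should now be roughly 50/50 instead of 90/10
drawn = torch.cat([y for _, y in loader])
```

Note that `replacement=True` means minority samples are drawn multiple times per epoch, which is exactly the oversampling effect you want here.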
Note that accuracy can be misleading on an imbalanced dataset, as described in the accuracy paradox.
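To make the paradox concrete, here is a toy illustration (the 95/5 split is made up for the sketch): a classifier that always predicts the majority class scores 95% accuracy while never detecting the minority class, which per-class recall and balanced accuracy expose immediately.

```python
# Sketch: accuracy paradox on a 95/5 imbalanced toy dataset.
targets = [0] * 95 + [1] * 5
preds = [0] * 100  # degenerate classifier: always predict class 0

accuracy = sum(p == t for p, t in zip(preds, targets)) / len(targets)
# accuracy == 0.95, yet the model is useless for class 1

# Per-class recall tells the real story
recall_0 = sum(p == t == 0 for p, t in zip(preds, targets)) / targets.count(0)  # 1.0
recall_1 = sum(p == t == 1 for p, t in zip(preds, targets)) / targets.count(1)  # 0.0
balanced_accuracy = (recall_0 + recall_1) / 2
# balanced_accuracy == 0.5, i.e. no better than chance
```

This is why a confusion matrix or per-class recall is a better diagnostic than plain accuracy for the kind of class-0/class-1 confusion described above.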