I have a class imbalance where the classes are split like this:
0: 5.2%
1: 6.51%
2: 76.7%
3: 6.38%
4: 5.13%
I'm computing class weights with

import numpy as np
import torch
import torch.nn as nn
from sklearn.utils.class_weight import compute_class_weight

class_weights = compute_class_weight(class_weight='balanced', classes=np.unique(ys), y=ys)
# produces: class_weights = [3.8412954, 3.06959094, 0.26054039, 3.13274541, 3.8984915]
...
# CrossEntropyLoss expects the weights as a float tensor, not a numpy array
loss_function = nn.CrossEntropyLoss(weight=torch.tensor(class_weights, dtype=torch.float32))
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
However, I still see that the model is biased towards predicting class 2 and isn't really learning the other classes.
Any advice on how to address this?
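For reference, here is a minimal self-contained sketch of the weighted-loss setup described above. The label array ys is not shown in my code, so this uses synthetic labels drawn to roughly match the stated split (the sample size and random seed are arbitrary):

import numpy as np
import torch
import torch.nn as nn
from sklearn.utils.class_weight import compute_class_weight

# Synthetic stand-in for ys, roughly matching the stated class split
rng = np.random.default_rng(0)
ys = rng.choice(5, size=10_000, p=[0.052, 0.0651, 0.767, 0.0638, 0.0513])

# 'balanced' gives each class weight n_samples / (n_classes * count(class)),
# so the majority class 2 gets a weight well below 1
weights = compute_class_weight(class_weight='balanced', classes=np.unique(ys), y=ys)
weight_t = torch.tensor(weights, dtype=torch.float32)

# Sanity check with reduction='none': an error on a minority-class target
# costs more than the same error on the majority class
loss_none = nn.CrossEntropyLoss(weight=weight_t, reduction='none')
logits = torch.zeros(2, 5)         # uniform predictions
targets = torch.tensor([2, 0])     # one majority target, one minority target
per_sample = loss_none(logits, targets)
print(weights)
print(per_sample)                  # per_sample[1] > per_sample[0]

This at least confirms the weights are reaching the loss with the intended relative magnitudes, independent of the model.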