How to handle the loss function for an imbalanced dataset in a CNN?

How can I handle the loss function for an imbalanced dataset in a CNN? That is, how can I penalize errors differently for each class during training? I don't want to oversample or undersample the data.

You could pass the weight argument to your loss function, so that each class index will get the corresponding weighting. Have a look at the docs for more information regarding this parameter.
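
For example, here is a minimal sketch of passing a per-class weight tensor to nn.CrossEntropyLoss and applying it to a batch of logits; the three-class setup and the weight values are made up purely for illustration:

import torch
import torch.nn as nn

# illustrative 3-class setup; weight[i] scales the loss of samples whose target is class i
class_weights = torch.tensor([1.0, 2.0, 4.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 3)            # batch of 8 samples, 3 classes
targets = torch.randint(0, 3, (8,))   # ground-truth class indices
loss = criterion(logits, targets)     # errors on higher-weighted classes contribute more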

Thank you once again :slight_smile:

To get a better understanding of the weight argument: suppose I have an imbalanced dataset for a binary classification task, with nb_class1 = 3000 and nb_class2 = 1000. Then I can write the following:

import torch
import torch.nn as nn

class_weights = torch.FloatTensor([0.25, 0.75])  # index 0 -> class1, index 1 -> class2
criterion = nn.CrossEntropyLoss(weight=class_weights)

Is that the correct interpretation of the weight argument?

You don’t necessarily need to normalize your weights and could e.g. just use the inverse of the class counts.
@K_Frank gives some good examples and intuitions in this post.
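
For instance, a minimal sketch (reusing the class counts 3000 and 1000 from the example above) that uses the raw inverse class counts as weights, without normalizing them:

import torch
import torch.nn as nn

class_counts = torch.tensor([3000.0, 1000.0])   # counts from the example above
class_weights = 1.0 / class_counts              # inverse counts, not normalized
criterion = nn.CrossEntropyLoss(weight=class_weights)

# with the default reduction='mean', the weighted per-sample losses are divided
# by the sum of the target weights, so only the ratio between the weights matters
logits = torch.randn(4, 2)
targets = torch.randint(0, 2, (4,))
loss = criterion(logits, targets)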