Setting weight to the NLLLoss for imbalanced data

I have 2 imbalanced classes with sizes 3089 and 6179. How can I weight the loss function in order to penalize errors on the smaller class? I am using nn.NLLLoss().

WeightedRandomSampler might be helpful to you.
Here’s a discussion about it.
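A minimal sketch of the sampler approach, assuming you have a 1-D tensor of integer class labels for your dataset (the toy `labels` tensor below is a stand-in, not the poster's data):

```python
import torch
from torch.utils.data import WeightedRandomSampler

# Toy stand-in for the dataset's labels (minority class 0, majority class 1).
labels = torch.tensor([0, 0, 0, 1, 1, 1, 1, 1, 1])

# Weight each sample inversely to its class size so both classes are
# drawn roughly equally often.
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]

# replacement=True lets minority-class samples be drawn multiple times
# per epoch; pass this sampler to DataLoader(dataset, sampler=sampler, ...).
sampler = WeightedRandomSampler(
    weights=sample_weights,
    num_samples=len(labels),
    replacement=True,
)
```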

Hi Liu!

You can use the weight argument that is passed to NLLLoss's
constructor.
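For example, using the class counts from the question, one common choice (an assumption here, not the only option) is to weight each class inversely to its frequency:

```python
import torch
import torch.nn as nn

# Class counts from the question: 3089 and 6179 samples.
counts = torch.tensor([3089.0, 6179.0])

# Inverse-frequency weighting: the smaller class gets the larger weight,
# so errors on it are penalized more heavily.
weights = counts.sum() / (2.0 * counts)

criterion = nn.NLLLoss(weight=weights)

# Dummy batch: NLLLoss expects log-probabilities (e.g. from log_softmax).
log_probs = torch.log_softmax(torch.randn(4, 2), dim=1)
targets = torch.tensor([0, 1, 1, 0])
loss = criterion(log_probs, targets)
```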

As an aside, if you do, in fact, have only two classes, you could
treat your problem as a binary classification problem (rather than
a general multi-class problem that happens to have two classes),
use BCEWithLogitsLoss, and use its pos_weight argument.
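A sketch of the binary formulation, assuming the smaller class (3089 samples) is the "positive" class; pos_weight is then the ratio of negative to positive counts:

```python
import torch
import torch.nn as nn

# pos_weight = n_negative / n_positive, treating the minority class
# (3089 samples) as the positive class.
pos_weight = torch.tensor([6179.0 / 3089.0])

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

# BCEWithLogitsLoss takes raw logits (no sigmoid) and float targets in {0, 1}.
logits = torch.randn(4)
targets = torch.tensor([1.0, 0.0, 1.0, 0.0])
loss = criterion(logits, targets)
```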

(Aman’s WeightedRandomSampler suggestion is also reasonable.
It and adding weights to the loss function are two somewhat different
approaches for compensating for imbalanced data.)

Best.

K. Frank