How to interpret class weights when using CrossEntropyLoss

Hi all, my objective is to have the model prioritize class 1 and 2.
I’m currently doing the following:

weights = torch.tensor([5., 10., 10., 5., 5.])
class_weights = weights.cuda()  # weights is already a FloatTensor, so just move it to the GPU
criterion_weighted = nn.CrossEntropyLoss(weight=class_weights)

I’d like to know by how much the model is currently prioritizing those classes. Are the class weights relative to their sum, i.e. 5/35, 10/35, etc.?

The loss for each sample is multiplied by the weight of its target class, and the final loss is then normalized by the sum of the weights actually used in the batch (if reduction='mean' is used).
The docs give the formula, while this post gives you a manual example.

So the “prioritization” is relative to the used samples, not absolute.
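To make that concrete, here is a small sketch comparing the built-in weighted loss against a manual computation. The toy logits and targets are made up for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical toy batch: 3 samples, 5 classes
torch.manual_seed(0)
logits = torch.randn(3, 5)
targets = torch.tensor([1, 2, 0])

weights = torch.tensor([5., 10., 10., 5., 5.])

# Built-in weighted loss with the default reduction='mean'
criterion = nn.CrossEntropyLoss(weight=weights)
loss = criterion(logits, targets)

# Manual equivalent: with reduction='none', each per-sample loss is
# already scaled by the weight of that sample's target class; the
# 'mean' reduction then divides by the sum of those used weights
per_sample = nn.CrossEntropyLoss(weight=weights, reduction='none')(logits, targets)
used_weights = weights[targets]  # tensor([10., 10., 5.]) for this batch
manual = per_sample.sum() / used_weights.sum()

print(torch.allclose(loss, manual))  # True
```

Note that the denominator depends on which classes appear in the batch, which is exactly why the prioritization is relative rather than absolute.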
