How to reduce CrossEntropyLoss mean

I want to weight each pixel when computing my loss function. Right now I calculate the cross entropy loss with reduce=False for the images, multiply by the weights, and then take the mean. If I set all the weights to 1, I should get the same result as the reduced loss, but that's not the case.

loss_function = torch.nn.CrossEntropyLoss(weight=weight, reduce=False)   # per-pixel losses
loss_function_reduced = torch.nn.CrossEntropyLoss(weight=weight)         # built-in mean reduction

loss_tensor = loss_function(input, target)
weights = torch.ones(loss_tensor.size())          # all-ones per-pixel weights
loss_tensor = (loss_tensor * weights).mean()

loss_tensor_reduced = loss_function_reduced(input, target)
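
For reference, here is a minimal self-contained version of that comparison; the shapes, the class weights and the random data are made up just to reproduce the mismatch, and reduction='none' is the non-deprecated spelling of reduce=False:

import torch

torch.manual_seed(0)
B, C, H, W = 2, 3, 4, 5                          # made-up sizes
weight = torch.rand(C)                           # made-up class weights
input = torch.randn(B, C, H, W)                  # logits
target = torch.randint(0, C, (B, H, W))          # per-pixel class indices

loss_function = torch.nn.CrossEntropyLoss(weight=weight, reduction='none')
loss_function_reduced = torch.nn.CrossEntropyLoss(weight=weight)

loss_tensor = loss_function(input, target)
weights = torch.ones(loss_tensor.size())         # all-ones per-pixel weights
loss_manual = (loss_tensor * weights).mean()     # plain mean over all pixels

loss_reduced = loss_function_reduced(input, target)
print(loss_manual.item(), loss_reduced.item())   # the two values do not match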

Could someone help here?

Hi, I think it's a matter of digging into the source code. If I'm not wrong, it performs a sample-wise mean and then averages over the batch. In addition, the weights are multiplied batch-wise (I think). How are you setting those parameters?
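
If I read the docs right, with a weight tensor and the default mean reduction the formula is roughly (l_n being the unweighted loss of element n and target_n its class, in my notation):

loss = sum_n( weight[target_n] * l_n ) / sum_n( weight[target_n] )

so the denominator is the sum of the selected class weights, not the number of elements.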

I tried a plain .mean() over all the data, as well as

loss = (losses_ts.sum(dim=0)/batch_size).mean()

where losses_ts is the output of CrossEntropyLoss with reduce=False.

I figured it out. It works like this, assuming an output of shape B x H x W (loss_unreduced is the output of CrossEntropyLoss with reduce=False, and weight is the class-weight tensor passed to the loss):

loss = (loss_unreduced.sum(dim=(1,2))/weight[target].sum()).sum()

which is just loss_unreduced.sum()/weight[target].sum(), i.e. the built-in mean reduction normalizes by the summed class weights of the targets rather than by the pixel count.
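
Here is a quick self-contained check of that equivalence; the shapes, class weights and random data are made up, and reduction='none' is the newer spelling of reduce=False:

import torch

torch.manual_seed(0)
B, C, H, W = 2, 3, 4, 5                          # made-up sizes
weight = torch.tensor([0.2, 1.0, 3.0])           # made-up class weights
input = torch.randn(B, C, H, W)                  # logits
target = torch.randint(0, C, (B, H, W))          # per-pixel class indices

# per-pixel losses, already multiplied by weight[target]
loss_unreduced = torch.nn.CrossEntropyLoss(weight=weight, reduction='none')(input, target)

# built-in weighted mean reduction
loss_reduced = torch.nn.CrossEntropyLoss(weight=weight)(input, target)

# manual reduction: divide by the summed class weights of the targets
loss_manual = (loss_unreduced.sum(dim=(1, 2)) / weight[target].sum()).sum()

print(loss_reduced.item(), loss_manual.item())   # should match up to float error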