Get the value of crossentropy loss for each sample in the batch


I want to get the value of the nn.CrossEntropyLoss for each sample in the batch. Can someone please guide me?

For example: I have 5 samples in the batch, so I have 5 target values and 5 prediction values. If I do something like this:

criterion = nn.CrossEntropyLoss()

loss = criterion(y_pred, y_target)

Then this loss is a single scalar averaged over all the samples in the batch. Instead, I want the loss as a vector whose size equals the number of samples in the batch: if the batch has 5 elements, the loss should be a vector of 5 elements. I want to know this for nn.BCELoss, nn.MSELoss and nn.KLDivLoss as well; I guess it will be the same for all the losses. I want to do this because I want to multiply each per-sample loss by its own factor; for the final loss value I will then reduce with torch.sum.

The default reduction is 'mean'. You just need to specify reduction='none' and you will get the per-sample vector you want. The same reduction argument works for nn.BCELoss, nn.MSELoss and nn.KLDivLoss.

criterion = nn.CrossEntropyLoss(reduction='none')

loss = criterion(y_pred, y_target) # ← a tensor of shape (batch_size,), one loss per sample

Here is the documentation for nn.CrossEntropyLoss.
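A minimal sketch of the full workflow you described, with hypothetical logits, targets, and per-sample weighting factors, followed by the torch.sum reduction:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical batch: 5 samples, 3 classes.
y_pred = torch.randn(5, 3)                # raw logits, shape (5, 3)
y_target = torch.tensor([0, 2, 1, 1, 0])  # class indices, shape (5,)

# reduction='none' returns one loss value per sample instead of a scalar.
criterion = nn.CrossEntropyLoss(reduction='none')
per_sample = criterion(y_pred, y_target)  # shape (5,)

# Multiply each sample's loss by its own factor (hypothetical weights),
# then reduce to a single scalar with torch.sum.
weights = torch.tensor([1.0, 0.5, 2.0, 1.0, 0.25])
final_loss = torch.sum(weights * per_sample)
```

With uniform weights of 1/batch_size, `final_loss` reproduces the default reduction='mean' behavior, which is a quick sanity check for this pattern.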