Understanding the Loss Value (BCEWithLogitsLoss)

Hello,

I am a little confused by what a loss function produces.

I was looking at this post: Multi Label Classification in pytorch - #45 by ptrblck

I tried to recreate it to understand how the loss value is calculated, so I constructed a perfect output for a given target:

import torch
from torch.nn.modules.loss import BCEWithLogitsLoss

loss_function = BCEWithLogitsLoss()

# Given are 2 classes
output_tensor = torch.tensor([[0.0, 1.0]])  # Output of my nn
target_tensor = torch.tensor([[0.0, 1.0]])  # Target, identical to the output

loss = loss_function(output_tensor, target_tensor)
# Shouldn't this yield no loss at all?
print(loss.item())

# Output: 0.5032044053077698

If I understand correctly, this should yield a loss of 0.0, because the target is identical to the output.

But I get: 0.5032044053077698.

Did I miss something?

With best regards,
Patrick

nn.BCEWithLogitsLoss expects logits, which are unbounded values in (-inf, +inf) and can be seen as “unnormalized” probabilities (I’m sure @tom or @KFrank can give you a proper mathematical definition). The loss applies a sigmoid to them internally, so your output_tensor won’t match the targets perfectly. Since your output already contains probabilities, either use nn.BCELoss instead, or pass very low and high logits such as torch.tensor([[-1e10, 1e10]]) to nn.BCEWithLogitsLoss.
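
For completeness, here is a quick sketch (my own illustration, not part of the original reply) that reproduces the 0.5032 value by applying the sigmoid manually, and demonstrates both fixes:

import torch
import torch.nn as nn

targets = torch.tensor([[0.0, 1.0]])
logits = torch.tensor([[0.0, 1.0]])

# nn.BCEWithLogitsLoss applies a sigmoid first, so the effective
# predictions are sigmoid([0.0, 1.0]) ≈ [0.5, 0.7311], not [0.0, 1.0].
probs = torch.sigmoid(logits)

# Manual binary cross-entropy, averaged over elements:
manual = -(targets * torch.log(probs)
           + (1 - targets) * torch.log(1 - probs)).mean()
print(manual.item())  # 0.5032044... matches BCEWithLogitsLoss

# Fix 1: treat the outputs as probabilities and use nn.BCELoss:
print(nn.BCELoss()(torch.tensor([[0.0, 1.0]]), targets).item())  # 0.0

# Fix 2: keep nn.BCEWithLogitsLoss but pass extreme logits, which
# the sigmoid maps to (almost exactly) 0 and 1:
print(nn.BCEWithLogitsLoss()(torch.tensor([[-1e10, 1e10]]), targets).item())  # 0.0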
