Problem about torch.nn.BCELoss for soft labels

Hi there. I found this statement in the PyTorch docs (BCELoss — PyTorch 1.6.0 documentation). Does this mean that I could use torch.nn.BCELoss for soft labels, without first transforming the soft labels into hard labels with a threshold, when calling the BCE loss? Thanks.

Hi James!

Yes. BCELoss accepts a target (“labels”) consisting of probabilities
that range from 0.0 to 1.0, inclusive (so, “soft labels”). They do not have
to be exactly 0.0 or 1.0 (“hard labels”), although they can be.

As an aside, for reasons of numerical stability, you should use
BCEWithLogitsLoss in preference to BCELoss.
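For example, here is a minimal sketch (the tensor values are just illustrative) showing that soft targets can be passed straight in, with no thresholding:

import torch

logits = torch.randn(4)                              # raw, unnormalized model outputs
soft_target = torch.tensor([0.9, 0.3, 0.65, 0.0])    # probabilities in [0.0, 1.0]

# soft labels work directly – no thresholding required
loss = torch.nn.BCEWithLogitsLoss()(logits, soft_target)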

Best.

K. Frank

Thanks for your prompt reply! When calling BCE loss, is there a way to change the soft labels into hard ones by customizing nn.BCELoss with a specific threshold?

Hi James!

Neither BCEWithLogitsLoss (which you should use), nor BCELoss
(which you should not use) has a built-in thresholding feature.

But it’s easy enough to threshold target (the labels) before you pass
it in:

my_threshold = 0.65
thresholded_target = (target > my_threshold).float()   # hard 0.0 / 1.0 labels
loss = torch.nn.BCEWithLogitsLoss()(input, thresholded_target)

However, it might not be a good idea to threshold your target – your
ground-truth labels – if they are “soft” (probabilistic). In the soft case
their unthresholded values represent information that you might not
want to discard.

Let’s say that your threshold is 0.65. If your target is 0.649, you
will reward greatly a prediction of 0.0, penalize somewhat a prediction
of 0.65, and penalize greatly a prediction of 1.0. But if, instead,
your target is 0.651, you will penalize greatly a prediction of 0.0,
penalize mildly a prediction of 0.65, and reward greatly a prediction
of 1.0.
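To see the flip numerically, here is a small sketch (the prediction values are hypothetical, and binary_cross_entropy is used here just so we can work with probabilities directly):

import torch

my_threshold = 0.65
predictions = torch.tensor([0.01, 0.65, 0.99])   # predicted probabilities

# two nearly identical soft targets land on opposite sides of the threshold,
# so the per-element losses reverse almost completely
for soft_t in (0.649, 0.651):
    hard_t = torch.full_like(predictions, float(soft_t > my_threshold))
    loss = torch.nn.functional.binary_cross_entropy(predictions, hard_t, reduction="none")
    print(soft_t, loss)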

Would that really make sense for your use case?

Best.

K. Frank

Got it. Thank you very much.