What will happen if the input of torch.nn.functional.binary_cross_entropy is zero

If the input tensor of torch.nn.functional.binary_cross_entropy is zero and the target tensor is 1.0, the output will be 100.
Here is my code:

import torch
import torch.nn.functional as F

input = torch.tensor(0.0)
target = torch.tensor(1.0)
loss = F.binary_cross_entropy(input, target)
print(loss)  # tensor(100.)

So I want to know how the function proceeds when the input is zero, because in the math formula log(0.0) would cause a math domain problem, but this does not happen in PyTorch.
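For comparison, plain Python's math.log raises an error on zero, while torch.log returns -inf instead:

import math
import torch

# math.log(0.0)  # raises ValueError: math domain error
print(torch.log(torch.tensor(0.0)))  # tensor(-inf)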

I am looking forward to hearing from you.

BCELoss — PyTorch master documentation

Here is the original documentation; in the last sentence they explain that the output is clamped:

Our solution [regarding the infinity issue] is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method.
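To illustrate, here is a minimal sketch that reimplements the clamped loss by hand with torch.clamp (the -100 bound is taken from the documentation quote above; this is only an illustration, not the actual internal implementation):

import torch
import torch.nn.functional as F

input = torch.tensor(0.0)
target = torch.tensor(1.0)

# Naive BCE, -(y*log(x) + (1-y)*log(1-x)), diverges because log(0) = -inf
naive = -(target * torch.log(input) + (1 - target) * torch.log(1 - input))
print(naive)  # tensor(inf)

# Clamping each log term at -100 keeps the loss finite
log_x = torch.clamp(torch.log(input), min=-100)
log_1mx = torch.clamp(torch.log(1 - input), min=-100)
clamped = -(target * log_x + (1 - target) * log_1mx)
print(clamped)                                # tensor(100.)
print(F.binary_cross_entropy(input, target))  # tensor(100.)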


Thank you for your answer, but is it always equal to -100? Your answer says the output will be greater than or equal to -100, so will the output change from time to time? For example, could it be -101 in one execution and -100 in another? On my computer it is always -100, but could it ever change?

If a log term in your loss would produce a value lower than -100, it is clamped at -100, so the loss itself is capped at 100.
So -101 will become -100.
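As a quick sanity check, torch.clamp shows the same floor applied to a hypothetical log value of -101:

import torch

# values below the -100 floor are raised to -100
print(torch.clamp(torch.tensor(-101.0), min=-100))  # tensor(-100.)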

I don’t really understand what you mean by that. Can you elaborate?

The document says “Our solution [regarding the infinity issue] is that BCELoss clamps its log function outputs to be greater than or equal to -100”, so it is greater than or equal to -100. So what exactly is the output? “Greater than or equal to -100” is an interval, not a single number. Which number in the interval will be output?

Based on the description, the theoretical output in the range [-Inf, +Inf] is clipped to [-100, +Inf], if I’m not misunderstanding it. Thus the actual value depends on the calculated loss, but is clipped at -100 at its minimum.
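To make that concrete, here is a small sketch probing inputs close to zero (the probe values are my own choice; the exact printed numbers may vary slightly with dtype and PyTorch version):

import torch
import torch.nn.functional as F

target = torch.tensor(1.0)

# log(1e-40) is about -92.1, above the -100 floor, so no clamping happens
print(F.binary_cross_entropy(torch.tensor(1e-40), target))  # ~tensor(92.1034)

# 1e-45 rounds to the smallest float32 subnormal; its log is below -100,
# so the log term is clamped and the loss caps out at 100
print(F.binary_cross_entropy(torch.tensor(1e-45), target))  # tensor(100.)

# input = 0 gives log(0) = -inf, which is also clamped, so the loss is 100
print(F.binary_cross_entropy(torch.tensor(0.0), target))    # tensor(100.)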

Thanks! Now I understand it.