I want to know how the function behaves when the input is zero. In the math formula, log(0.0) would raise a math domain error, but this does not happen in PyTorch.
Here is the original documentation; in the last sentence they reveal that the output is clamped:
Our solution [regarding the infinity issue] is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method.
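You can see the clamping directly by feeding BCELoss a prediction of exactly 0 with a target of 1; instead of an infinite loss, you get the clamped value (this is a minimal demonstration, not part of the original thread):

```python
import torch
import torch.nn as nn

loss_fn = nn.BCELoss()

pred = torch.tensor([0.0])    # probability of exactly zero
target = torch.tensor([1.0])  # the "true" label

# Mathematically the loss is -log(0) = +inf, but PyTorch clamps
# the log output at -100, so the loss becomes -(-100) = 100.
loss = loss_fn(pred, target)
print(loss)  # tensor(100.)
```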
Thank you for your answer, but is it always exactly -100? Your answer says the output will be greater than or equal to -100, so can the value change from run to run? For example, could it be -101 in one execution and -100 in another? On my computer it is always -100, but could it ever differ?
The documentation says "Our solution [regarding the infinity issue] is that BCELoss clamps its log function outputs to be greater than or equal to -100". So it is greater than or equal to -100, but what exactly is the output? "Greater than or equal to -100" describes an interval, not a single number. Which number in that interval will be output?
Based on the description, the theoretical log output in the range (-Inf, 0] is clipped to [-100, 0], if I'm not misunderstanding it. So the actual value depends on the calculated loss and is not a constant; it is only the minimum that is clipped at -100. The clamp only kicks in when the prediction is at (or extremely close to) 0 or 1; for any ordinary prediction the loss is the exact mathematical value.
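To make that concrete, here is a sketch (my own illustration, assuming the clamp threshold of -100 stated in the docs) that reproduces BCELoss by clamping the log terms manually, and shows that ordinary inputs are untouched while a zero input hits the clamp:

```python
import torch
import torch.nn as nn

def bce_clamped(pred, target, clamp_min=-100.0):
    # Emulate the documented behavior: clamp each log output at -100
    # before combining, so 0 and 1 inputs give finite losses.
    log_p = torch.clamp(torch.log(pred), min=clamp_min)
    log_1mp = torch.clamp(torch.log(1 - pred), min=clamp_min)
    return -(target * log_p + (1 - target) * log_1mp).mean()

pred = torch.tensor([0.0, 0.5, 0.9])
target = torch.tensor([1.0, 1.0, 1.0])

# For pred=0.5 and 0.9 the clamp is inactive (-log(0.5) ≈ 0.693,
# -log(0.9) ≈ 0.105); only pred=0 is clipped, to exactly 100.
print(bce_clamped(pred, target))
print(nn.BCELoss()(pred, target))  # matches the manual version
```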