Why is the loss inf when I add .log_() to the output?

My model worked fine before, but now I want to use the logarithm of the original output to calculate the loss. However, the loss becomes inf. I have printed the values of output.abs_().log_() and y.log_(), and they are all normal values (no inf). My loss function is nn.L1Loss(), but the loss value is inf.
So have I missed something or done something wrong? I only want to calculate the L1Loss using the logarithm of the original output.
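Roughly, this is the computation I mean (the tensors below are made up, not my real model or data):

```python
import torch
import torch.nn as nn

# Made-up tensors, just to show the computation I mean
output = torch.randn(8)          # stand-in for the model output
y = torch.rand(8) + 0.1          # stand-in for the (positive) target

criterion = nn.L1Loss()
# L1 loss between the log of the absolute output and the log of the target
loss = criterion(output.abs().log(), y.log())
print(loss)
```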

Hi,

The thing is that if your input is 0 (or very close to it), the log will give -inf (or a very large negative value), and the loss computed from it will blow up to inf. Is that what happens? You could try adding a small epsilon to your input before taking the log to avoid this problem.
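Something like this (the numbers and the epsilon value here are just examples, pick whatever makes sense for the scale of your data):

```python
import torch
import torch.nn as nn

output = torch.tensor([0.5, 2.0, 1.3, 0.7, 0.0])  # note the exact zero
y = torch.tensor([0.6, 1.8, 1.5, 0.9, 0.2])

criterion = nn.L1Loss()
print(criterion(output.abs().log(), y.log()))      # tensor(inf), log(0) = -inf

eps = 1e-8  # example value, choose according to the scale of your data
print(criterion((output.abs() + eps).log(), (y + eps).log()))  # finite now
```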


Thanks for your answer! This is the problem. But the tricky point is that when output[0] and y[0] are not zero, the loss is a valid number, but if some element of the output (for example, output[4]) is zero, then the whole loss becomes inf, not just loss[4].

Yes, nans and infinities will propagate. nn.L1Loss reduces with the mean by default, so a single inf element makes the whole reduced loss inf.
Unfortunately, you need to be careful when working with functions like log or inverse to avoid having such values.
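For example, here is what happens with the default mean reduction versus reduction='none' (made-up numbers, not your actual tensors):

```python
import torch
import torch.nn as nn

pred = torch.tensor([0.5, 2.0, 1.3, 0.7, 0.0]).log()   # log(0) = -inf
target = torch.tensor([0.6, 1.8, 1.5, 0.9, 0.2]).log()

# With the default reduction='mean', the single inf element drags
# the whole loss to inf
print(nn.L1Loss()(pred, target))                        # tensor(inf)

# With reduction='none' you can see that only the last element is inf
print(nn.L1Loss(reduction='none')(pred, target))
```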