SummaryWriter.add_scalar() logging issue

Hi, I ran into a problem with TensorBoard's behavior when recording scalar values with SummaryWriter.add_scalar().

In my experiment I use a set of loss criteria whose values I record at each epoch with add_scalar('Lossname', loss.item(), epoch).
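Concretely, the logging loop looks roughly like this (a minimal sketch; the tag names, epoch count, and loss values are placeholders, not my real training code):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()  # writes event files to ./runs by default

num_epochs = 100  # placeholder
for epoch in range(num_epochs):
    # ... training step that produces the three loss tensors ...
    loss1 = torch.tensor(0.5)   # placeholder values, just for illustration
    loss2 = torch.tensor(1.2)
    loss3 = torch.tensor(3.7)

    # one scalar per tag, indexed by the epoch number
    writer.add_scalar('Loss1', loss1.item(), epoch)
    writer.add_scalar('Loss2', loss2.item(), epoch)
    writer.add_scalar('Loss3', loss3.item(), epoch)

writer.close()
```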

Two of the values are recorded correctly, and their scalar graphs display as expected in the TensorBoard viewer.
The third value produces a graph in which every point is infinite, yet if I print loss3.item() at runtime it is a plain Python float that is neither too large nor too small (it stays between 0 and 10).
Why is it not logged correctly?
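For reference, a quick check like this right before logging always prints a plain, finite Python float in the expected range (sketch; the tensor value below is a placeholder):

```python
import math
import torch

loss3 = torch.tensor(3.74)   # placeholder for the actual loss tensor

value = loss3.item()         # plain Python float
print(type(value), value, math.isfinite(value))
# prints something like: <class 'float'> 3.74... True
```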

Also, since the total training loss is a weighted sum of these individual losses and training converges successfully, I know that loss3 never actually goes to infinity.
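Something along these lines (the weights and loss values below are examples, not my real hyperparameters):

```python
import torch

# placeholder loss values; in training these come from the three criteria
loss1 = torch.tensor(0.8, requires_grad=True)
loss2 = torch.tensor(1.1, requires_grad=True)
loss3 = torch.tensor(3.2, requires_grad=True)

w1, w2, w3 = 1.0, 0.5, 0.1   # example weights
total_loss = w1 * loss1 + w2 * loss2 + w3 * loss3

# the total stays finite throughout training, so loss3 cannot be diverging
print(total_loss.item())
total_loss.backward()
```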

In simplified form, the three loss functions look roughly like this (the class names and bodies below are illustrative placeholders):
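```python
import torch
import torch.nn as nn

# loss1: PyTorch's built-in MSE criterion
loss1 = nn.MSELoss()

# loss2: a small module built around the built-in L1 criterion (simplified)
class Loss2(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.L1Loss()

    def forward(self, pred, target):
        return self.l1(pred, target)

# loss3: entirely hand-defined, no built-in criterion inside (placeholder body)
class Loss3(nn.Module):
    def forward(self, pred, target):
        diff = pred - target
        return torch.mean(torch.sqrt(diff ** 2 + 1e-8))

loss2 = Loss2()
loss3 = Loss3()
```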

Why are the values returned by loss3.item() not logged correctly?
I suspect it is because loss1() and loss2() rely on PyTorch's built-in criteria: loss1() is plain MSELoss, while loss2() is built around L1Loss. In contrast, loss3() is completely hand-defined, so perhaps its class is missing some method…