Custom losses tend to be much less numerically stable.
First, check that you are not passing negative values (or zero) to a log, dividing by zero, or doing anything similar.
Also have a look at
https://pytorch.org/docs/stable/autograd.html#torch.autograd.detect_anomaly
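A minimal sketch of how you could use it (the loss here is just a made-up example): wrapping the forward and backward pass in `detect_anomaly` makes autograd raise an error at the first operation that produces NaN, instead of silently propagating it.

```python
import torch

x = torch.rand(4, requires_grad=True)

# detect_anomaly is a context manager; any NaN produced during
# backward() inside it raises a RuntimeError pointing at the op.
with torch.autograd.detect_anomaly():
    # Clamping before the log avoids log(0) = -inf and NaN gradients.
    loss = torch.log(x.clamp(min=1e-8)).mean()
    loss.backward()
```

Note it slows training down noticeably, so only enable it while debugging.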
I'd still recommend checking the input data if you apply any suspicious transform. (Note that normalizing a signal whose values are all close to 0 leads to a division by zero, for example.)
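To illustrate that last point, a small sketch (the epsilon value is just an assumption): dividing a near-silent signal by its standard deviation gives 0/0 = NaN, while adding a tiny epsilon keeps the result finite.

```python
import torch

signal = torch.zeros(10)  # e.g. a silent segment of audio

eps = 1e-8
unsafe = signal / signal.std()          # std() is 0 here -> 0/0 = NaN
safe = signal / (signal.std() + eps)    # denominator never reaches 0
```

Checking `torch.isfinite(...)` on intermediate tensors like this is a cheap way to locate where the NaN first appears.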