I have two loss functions that I’m trying to optimize for. One is NLL so the output is almost always negative. The other is L1 Loss so the loss is always positive. They also have different magnitudes. What’s the best way to combine them into one loss? I can add weights like I’ve seen suggested, but that wouldn’t change the sign of the NLL.
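A minimal sketch of the weighted-sum approach, assuming hypothetical weights `w_nll` and `w_l1` (not values from this thread). Note that a negative NLL term is not a problem for optimization itself: gradient descent only follows the direction of decrease, so the absolute sign of the combined loss doesn't matter.

```python
# Hypothetical sketch: combining an NLL term and an L1 term via a weighted sum.
# w_nll and w_l1 are tuning knobs you would pick for your problem.

def combined_loss(nll: float, l1: float, w_nll: float = 1.0, w_l1: float = 1.0) -> float:
    """Weighted sum of the two loss terms.

    The NLL term may be negative (e.g. a continuous density > 1);
    that is fine for gradient-based optimization, which only cares
    about relative changes in the loss, not its sign.
    """
    return w_nll * nll + w_l1 * l1

# Example: a negative NLL combined with a positive L1 term.
total = combined_loss(nll=-2.3, l1=0.7, w_nll=1.0, w_l1=3.0)
```

In practice the weights are often chosen so the two terms have comparable magnitudes early in training.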
The loss of nn.NLLLoss is bound to [0, Inf], so it's positive. Do you mean the log probability output of your model is negative?
I’m using a custom distribution and loss function; I just meant negative log-likelihood in general, not nn.NLLLoss. My apologies. Is it best to bound my custom NLL to [0, Inf]? I can share my code if it helps.
Yes, a “standard” loss function definition would return a zero loss for a perfect prediction and a positive value otherwise.
Gotcha! Thank you. Assuming I have a custom distribution/loss function, what should I do to ensure it always returns values in [0, Inf]?
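One simple option, sketched below under the assumption that you can evaluate the likelihood directly: clamp the NLL at zero. For a discrete distribution the likelihood is a probability ≤ 1, so the NLL is already nonnegative; a continuous density can exceed 1, which is what makes the raw NLL go negative. The function name here is hypothetical.

```python
import math

def bounded_nll(likelihood: float) -> float:
    """Negative log-likelihood clamped to [0, inf).

    For discrete distributions likelihood <= 1, so -log(likelihood)
    is already nonnegative and the clamp is a no-op. For a continuous
    density > 1 the raw NLL is negative; clamping restores the
    [0, inf) range, at the cost of a zero gradient wherever the
    clamp is active.
    """
    return max(0.0, -math.log(likelihood))

# A probability of 0.25 gives a positive NLL; a density of 3.0 would
# give a negative raw NLL, which the clamp maps to 0.
```

Whether clamping is appropriate depends on your model: it flattens the loss surface in the clamped region, so some people instead leave the NLL unbounded and accept negative values.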
For reference, here’s the likelihood equation: r, alpha, a, b are outputs of my model, and x, tx, and T are data.