Dice loss implementation issues (on smooth/eps factor)

I’m running into a wall with my current soft Dice implementation, where it feels like I “can’t win”.

To recap, soft dice is:
2TP / (2TP + FP + FN)
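In code, that’s roughly the following (a minimal sketch, assuming PyTorch, sigmoid probabilities, binary targets of the same shape, and whole-batch reduction; `soft_dice` is just my name for it):

```python
import torch

def soft_dice(probs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Soft counts from probabilities in [0, 1] and binary targets.
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    # Unsmoothed: this is 0/0 = NaN when every count is zero.
    return 2 * tp / (2 * tp + fp + fn)
```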

One way to smooth it is to add an eps of, say, 1e-8 to the denominator:
2TP / (2TP + FP + FN + eps)
When TP is exactly zero this works, in the sense that you don’t get NaNs: the numerator is zero and the denominator is at least eps, so the result is just 0. My worry was that when TP is smaller than eps you’d be dividing by a very small number and blow up to infinity, but on reflection the ratio itself is bounded by 1, since the numerator can never exceed the denominator. Still, I’m getting a NaN/inf error when training with this loss function right now. I don’t think it’s NaN, because thanks to eps I can’t be dividing by zero, so by elimination the error has to be inf? (One suspicion, checked in the sketch below: an eps of 1e-8 underflows to zero in half precision, which brings the 0/0 back.)
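A quick check of the TP = 0 case, plus what happens to an eps this small in half precision (a sketch assuming PyTorch; the fp16 line only matters if you train with AMP/fp16):

```python
import torch

eps = 1e-8
tp = fp = fn = torch.tensor(0.0)

dice = 2 * tp / (2 * tp + fp + fn + eps)
print(dice)  # tensor(0.) -- no NaN, though note loss = 1 - dice = 1 here

# eps = 1e-8 is below the smallest positive float16 (~6e-8),
# so it rounds to zero and the 0/0 NaN comes back:
print(torch.tensor(eps, dtype=torch.float16))  # tensor(0., dtype=torch.float16)
```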

The other way to smooth is to add 1. This is what I did previously.
(2TP + 1) / (2TP + FP + FN + 1)
This ONLY works if TP is large relative to the smoothing term. If you have a situation where there are no positive pixels, and for simplicity no FP and no FN either, you get 1/1 = 1, which is zero loss (since loss is 1 - dice), and I’m not 100% sure I want that. Half my dataset has no positive pixels, so this earlier implementation “incorrectly” gives those images a perfect score even when all the model has learned is to predict everything as background (see the sketch below).
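Concretely, here’s the all-background case with the +1 smoothing (same hypothetical setup as the sketch above):

```python
import torch

smooth = 1.0
probs = torch.zeros(16, 16)    # model outputs pure background
targets = torch.zeros(16, 16)  # image genuinely has no positive pixels

tp = (probs * targets).sum()
fp = (probs * (1 - targets)).sum()
fn = ((1 - probs) * targets).sum()

dice = (2 * tp + smooth) / (2 * tp + fp + fn + smooth)
print(dice)  # tensor(1.) -> loss = 1 - dice = 0 for free
```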

The third option I’m thinking of is adding eps to both the numerator and the denominator.
(2TP + eps) / (2TP + FP + FN + eps)
But the issue I see with this is that if TP, FP and FN are all 0, then I get eps/eps = 1, which is also not what I want!
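A quick check confirms the same collapse (plain Python, hypothetical counts):

```python
eps = 1e-8
tp = fp = fn = 0.0
print((2 * tp + eps) / (2 * tp + fp + fn + eps))  # 1.0, same free score as with +1
```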

How do I fix this? It seems that whichever option I pick, some edge case ruins it.