Dice Loss + Cross Entropy

Hello everyone,
I don’t know if this is the right place to ask this but I’ll ask anyways.
I am working on a multi-class semantic segmentation problem, and I want to use a loss function that incorporates both Dice loss and cross-entropy loss. How should I combine them?
I don't think a simple addition of Dice loss + cross entropy makes sense, as the Dice loss is bounded between 0 and 1, whereas the cross-entropy value can take arbitrarily large values.
So is there some kind of normalization that should be performed on the CE loss to bring it to the same scale as the Dice loss?
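To make the question concrete, this is roughly what I mean by a "simple addition", i.e. a plain weighted sum; the weights and the dice_loss argument are just placeholders here, not something I have settled on:

```python
import torch.nn.functional as F

# Rough sketch of the plain weighted sum I am unsure about
# (w_ce / w_dice are made-up placeholders, dice_loss is assumed to be in [0, 1])
def naive_combined_loss(logits, target, dice_loss, w_ce=1.0, w_dice=1.0):
    ce = F.cross_entropy(logits, target)    # unbounded, easily larger than 1
    return w_ce * ce + w_dice * dice_loss   # CE can dominate unless the weights are tuned
```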
Thanks in advance

Hello, I have exactly the same problem. Have you found anything interesting?

For the moment I am trying to normalize the CrossEntropyLoss by setting the reduction parameter to 'mean'. The problem is, as you said… the cross entropy can still take values bigger than 1.
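Here is a minimal sketch of what I mean: reduction='mean' only averages over pixels, it does not bound the value. With C classes, even a uniform prediction already gives CE = log(C), which is above 1 for C >= 3 (the tensor shapes below are just an example):

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss(reduction='mean')   # averages over pixels, but is not bounded by 1

logits = torch.zeros(1, 5, 4, 4)             # uniform prediction over 5 classes
target = torch.randint(0, 5, (1, 4, 4))      # random class indices per pixel
print(ce(logits, target))                    # ~log(5) ≈ 1.61, already above 1
```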

I am actually trying Loss = CE - log(dice_score), where dice_score is the Dice coefficient (as opposed to the dice_loss, where basically dice_loss = 1 - dice_score). I will wait for the results, but some hints or help would be really helpful.
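In case it helps the discussion, here is a rough sketch of how I am implementing that CE - log(dice_score) idea for the multi-class case (the class name, the soft Dice formulation and the smooth term are my own choices, not an established recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CEDiceLogLoss(nn.Module):
    """Sketch of Loss = CE - log(dice_score) for multi-class segmentation."""

    def __init__(self, smooth=1.0):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.smooth = smooth  # keeps dice_score away from 0, so log() stays finite

    def forward(self, logits, target):
        # logits: (N, C, H, W) raw scores, target: (N, H, W) class indices
        ce = self.ce(logits, target)

        num_classes = logits.shape[1]
        probs = F.softmax(logits, dim=1)
        target_1hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()

        # soft Dice coefficient, averaged over classes
        dims = (0, 2, 3)
        intersection = (probs * target_1hot).sum(dims)
        cardinality = probs.sum(dims) + target_1hot.sum(dims)
        dice_score = ((2 * intersection + self.smooth) / (cardinality + self.smooth)).mean()

        return ce - torch.log(dice_score)
```

Since dice_score is in (0, 1], the -log(dice_score) term is non-negative and roughly on the same scale as CE, which is the point of this formulation; the smooth term just avoids an infinite loss when there is no overlap early in training.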