Mask shapes for dice loss + cross entropy loss

Hello, I am currently working on semantic segmentation.
Originally I used only cross entropy loss, so my mask shape was [batch_size, height, width].
Now that I want to add dice loss as well, I use the code below to transform the mask shape to [batch_size, class_number(=3), height, width]:

import numpy as np
import torch

def target_shape_transform(target):
    # target: integer class indices of shape [batch_size, height, width]
    tr_tar = target.cpu().numpy()
    # one-hot encode by comparing against class ids 0..2 -> [batch_size, height, width, 3]
    tr_tar = (np.arange(3) == tr_tar[..., None])
    # move the class dimension to the front -> [batch_size, 3, height, width]
    tr_tar = np.transpose(tr_tar, (0, 3, 1, 2))
    # cast the boolean mask to float so it multiplies cleanly with the predictions
    return torch.from_numpy(tr_tar.astype(np.float32)).cuda()
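
As an aside, the same transform can stay on the GPU without the NumPy round trip by using torch.nn.functional.one_hot. A minimal sketch, assuming target holds integer class indices in [0, 3):

import torch
import torch.nn.functional as F

def target_shape_transform_torch(target, num_classes=3):
    # one_hot appends the class dimension last: [B, H, W] -> [B, H, W, C]
    one_hot = F.one_hot(target.long(), num_classes=num_classes)
    # move the class dimension to position 1 -> [B, C, H, W]
    return one_hot.permute(0, 3, 1, 2).float()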

import torch.nn as nn

def calc_loss(pred, target, metrics, ce_weight=0.2):
    # CrossEntropyLoss expects raw logits [B, C, H, W] and integer class indices [B, H, W]
    ce = nn.CrossEntropyLoss()
    ce_loss = ce(pred, target.long())
    # the dice term expects the one-hot mask [B, C, H, W] instead
    target = target_shape_transform(target)
    dice = dice_loss(pred, target)
    # weighted combination: ce_weight on the CE term, the rest on the dice term
    loss = ce_loss * ce_weight + (1.0 - dice) * (1.0 - ce_weight)
    return loss
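
Since dice_loss itself is not shown here, one common form it could take is sketched below. Note that for the (1.0 - dice) term above to act as a loss, dice_loss must return the dice coefficient (a similarity score in [0, 1], higher is better), not a loss. This is a sketch under that assumption, taking pred as raw logits:

import torch
import torch.nn.functional as F

def dice_loss(pred, target, smooth=1e-6):
    # pred: raw logits [B, C, H, W]; target: one-hot float mask [B, C, H, W]
    prob = F.softmax(pred, dim=1)
    # sum over the spatial dimensions, per sample and per class
    intersection = (prob * target).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    # mean dice coefficient over classes and batch
    return ((2.0 * intersection + smooth) / (union + smooth)).mean()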

It works, but I cannot find any other reference on how to combine CE and dice loss.
Can anybody tell me whether this approach is right?

Hello Hwarang,

It is correct to combine the losses and backpropagate them this way: autograd will propagate gradients through both terms, so your network parameters will be updated with respect to both ce_loss and dice.
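
For illustration, a typical training step with the combined loss might look like this (a sketch; model, optimizer, train_loader, and metrics are assumed to come from your own training loop):

for images, masks in train_loader:
    images, masks = images.cuda(), masks.cuda()
    optimizer.zero_grad()
    pred = model(images)                      # raw logits [B, 3, H, W]
    loss = calc_loss(pred, masks, metrics)    # combined CE + dice term
    loss.backward()                           # gradients flow through both terms
    optimizer.step()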
