Hello, I am currently working on semantic segmentation.
Originally, I used only cross-entropy loss, so the mask shape was [batch_size, height, width].
But as I try to add Dice loss as well, I use this code to convert the mask to shape [batch_size, class_number(=3), height, width]:
import numpy as np
import torch
import torch.nn as nn

def target_shape_transform(target):
    # one-hot encode: [B, H, W] integer labels -> [B, 3, H, W]
    tr_tar = target.cpu().numpy()
    tr_tar = (np.arange(3) == tr_tar[..., None])     # [B, H, W, 3] boolean one-hot
    tr_tar = np.transpose(tr_tar, (0, 3, 1, 2))      # [B, 3, H, W]
    return torch.from_numpy(tr_tar).float().cuda()   # cast to float for the Dice loss
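For reference, a quick shape check (the tensor sizes are just for illustration, and it assumes a GPU is available since the function calls .cuda()):

dummy = torch.randint(0, 3, (2, 4, 4))      # [B, H, W] integer labels in {0, 1, 2}
print(target_shape_transform(dummy).shape)  # torch.Size([2, 3, 4, 4])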
def calc_loss(pred, target, metrics, ce_weight=0.2):
    # cross-entropy expects class indices of shape [B, H, W]
    ce = nn.CrossEntropyLoss()
    ce_loss = ce(pred, target.long())

    # dice_loss is my own function; it needs the one-hot target of shape [B, 3, H, W]
    target = target_shape_transform(target)
    dice = dice_loss(pred, target)

    # weighted sum: ce_weight * CE + (1 - ce_weight) * (1 - dice)
    loss = ce_loss * ce_weight + (1.0 - dice) * (1.0 - ce_weight)
    return loss
It works anyway, but I cannot find any reference for combining cross-entropy and Dice loss this way.
Can anybody tell me if this approach is right?
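In case it matters, I also considered doing the one-hot conversion directly in PyTorch instead of going through numpy. Something like this sketch (target_shape_transform_torch is just a name I made up for the alternative, I have not swapped it in yet):

import torch.nn.functional as F

def target_shape_transform_torch(target):
    # same conversion, done on the GPU without a numpy round trip
    one_hot = F.one_hot(target.long(), num_classes=3)  # [B, H, W, 3]
    return one_hot.permute(0, 3, 1, 2).float()          # [B, 3, H, W]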