Dice loss larger than 1?

Hi everyone,

I'm using the dice coefficient as a criterion to evaluate my image segmentation model.

My data loader yields images of shape (batch_size, channels=1, width, height).

When I get the output from U-Net, its shape is (batch_size, channels=1, width, height) as well.

I referenced the dice_coeff implementation from https://github.com/milesial/Pytorch-UNet/blob/master/dice_loss.py
and use the following code to evaluate it:

import torch  # model, test_loader, device, and dice_coeff are defined earlier

with torch.no_grad():
    total = 0
    for i, (images, labels) in enumerate(test_loader):
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        # threshold the sigmoid probabilities at 0.5 to get a binary mask
        # (torch.sigmoid replaces the deprecated F.sigmoid)
        mask_pred = (torch.sigmoid(outputs) > 0.5).float()
        # divide by 2 as per the discussion linked below
        total += dice_coeff(mask_pred, labels).item() / 2
        print("index i: ", i + 1)
        # running average of the dice score over the batches seen so far
        print("test result for dice coeff.: ", total / (i + 1))

While accumulating total, I divide by 2 because of this discussion https://github.com/pytorch/pytorch/issues/1249, where rogertrullo writes: "then the loss should go beyond -1. Basically what I do is to add the individual dice scores, so the perfect score should be -1 * the number of classes. If you want it to be between 0 and minus one, you should divide it by the number of classes", i.e. the score should be divided by the number of classes.
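
If I understand that comment correctly, for a multi-class problem you compute one dice score per class and divide the sum by the number of classes, so the average stays between 0 and 1. A sketch of that idea (the one-hot encoding and n_classes=3 are my assumptions, not the linked code):

import torch
import torch.nn.functional as F

def mean_multiclass_dice(pred, target, n_classes=3):
    # pred / target: integer class maps (LongTensor), shape (batch, H, W).
    # One dice score per class, then divide by n_classes
    # so the average stays between 0 and 1.
    pred_1h = F.one_hot(pred, n_classes).float()       # (batch, H, W, n_classes)
    target_1h = F.one_hot(target, n_classes).float()
    dims = (0, 1, 2)                                   # sum out batch and spatial dims
    inter = (pred_1h * target_1h).sum(dim=dims)
    union = pred_1h.sum(dim=dims) + target_1h.sum(dim=dims)
    per_class = (2 * inter + 1e-6) / (union + 1e-6)    # one score per class
    return per_class.sum() / n_classes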

(My labels contain the values {0, 1, 2} for some samples.)
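
I wonder if this is related: applying the binary dice formula directly to a label map that still contains 2s can push the ratio above 1. A toy computation with made-up values:

import torch

pred = torch.tensor([1., 1., 0.])    # binary prediction after thresholding
target = torch.tensor([2., 2., 0.])  # label still contains the class value 2

inter = torch.dot(pred, target)      # 1*2 + 1*2 + 0*0 = 4
union = pred.sum() + target.sum()    # 2 + 4 = 6
print(2 * inter / union)             # 8 / 6 ≈ 1.33 -> dice "score" above 1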

I'm puzzled as to why the dice value comes out larger than 1. I must be missing something tricky; any suggestions are appreciated!

Thank you very much!

Best,
Peter