Negative DICE loss values during training

I am training a U-Net architecture for medical image semantic segmentation (supervised learning with image/mask pairs), and I am getting strange numbers for the DICE loss:

Vanila_UNet
Epoch [0]
Mean loss on train: -140.31943819224836 
Mean DICE on train: 1.7142934219089918 
Mean DICE on validation: 1.8950854703170916
Epoch [1]
Mean loss on train: -154.01165542602538 
Mean DICE on train: 1.8439450739097656 
Mean DICE on validation: 1.923283325048502
Epoch [2]
Mean loss on train: -155.57704811096193 
Mean DICE on train: 1.8617926383475962 
Mean DICE on validation: 1.9318473889899364
Epoch [3]
Mean loss on train: -156.61962712605794 
Mean DICE on train: 1.8733720566917649 
Mean DICE on validation: 1.933697909810023
Epoch [4]
Mean loss on train: -157.22541224161785 
Mean DICE on train: 1.8788127825940564 
Mean DICE on validation: 1.9533974303968433

I am using the augmentation pipeline for normalisation; however, the mask keeps min 0 and max 255, while the image is normalised. In this scenario I get the numbers above and the network at least tries to learn something, see image https://imgur.com/a/WqRhbaM

If I instead divide the mask by 255 when loading the data for the DataLoader, both mean DICE on train and mean DICE on validation become 0, the mean loss is still negative but small (around -0.025), and the final prediction of the network is a blank image.
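For context, this is roughly how I check the value ranges after loading (train_loader is just a placeholder name for my DataLoader over this dataset):

images, masks = next(iter(train_loader))
print(images.min().item(), images.max().item())  # normalised image range
print(masks.min().item(), masks.max().item())    # 0 and 255 here, unless I divide the mask by 255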

I assume the problem is in data loading:

import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2
from torch.utils.data import Dataset


class DukePeopleDataset(Dataset):

    def __init__(self, df, img_w, img_h):
        self.IMG_SIZE_W = img_w
        self.IMG_SIZE_H = img_h
        self.df = df
        self.in_channels = 3
        self.out_channels = 1

        self.transforms = self.define_transforms()

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        # column 0: image path, column 1: mask path (loaded as single-channel grayscale)
        image = cv2.resize(cv2.imread(self.df.iloc[idx, 0]), (self.IMG_SIZE_W, self.IMG_SIZE_H))
        mask = cv2.resize(cv2.imread(self.df.iloc[idx, 1], 0), (self.IMG_SIZE_W, self.IMG_SIZE_H))
        # mask = mask / 255.

        augmented = self.transforms(image=image, mask=mask)

        image = augmented['image']
        mask = augmented['mask']
        mask = mask.unsqueeze(0)  # add channel dimension: (H, W) -> (1, H, W)

        return image, mask

    def get_dataframe(self):
        return self.df

    def define_transforms(self):
        # A.Normalize only affects the image; the mask is passed through unchanged
        transforms = A.Compose([
            A.HorizontalFlip(p=0.5),
            A.Normalize(p=1.0),
            ToTensorV2(),
        ])
        return transforms

I don’t know which Dice loss implementation you are using, but check which value ranges are expected for the model output (e.g. logits or probabilities) and for the target.
I would expect the target to contain values in [0, 1].
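For example, a minimal sketch of how the mask could be prepared inside __getitem__, assuming a binary segmentation task with masks stored as 0/255 images:

# inside __getitem__:
mask = cv2.resize(cv2.imread(self.df.iloc[idx, 1], 0),
                  (self.IMG_SIZE_W, self.IMG_SIZE_H))
# binarize to {0., 1.}: resizing with the default bilinear interpolation
# can introduce intermediate values between 0 and 255
mask = (mask > 127).astype('float32')

augmented = self.transforms(image=image, mask=mask)
image = augmented['image']
mask = augmented['mask'].unsqueeze(0)  # float target in {0., 1.}, shape (1, H, W)

Note that cv2.resize uses bilinear interpolation by default, so a simple division by 255 can still leave values strictly between 0 and 1 unless the mask is thresholded (or resized with cv2.INTER_NEAREST).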

Is there a DICE loss implementation that you recommend?


Yes, there are plenty of implementations online. Search for code available on Kaggle under ‘image segmentation’.
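As a starting point, here is a minimal sketch of a common soft Dice loss formulation, assuming the model outputs raw logits and the target is a float mask of 0s and 1s with the same shape:

import torch
import torch.nn as nn


class SoftDiceLoss(nn.Module):
    def __init__(self, smooth=1.0):
        super().__init__()
        self.smooth = smooth

    def forward(self, logits, targets):
        # convert logits to probabilities in [0, 1]
        probs = torch.sigmoid(logits)
        probs = probs.view(probs.size(0), -1)
        targets = targets.view(targets.size(0), -1)

        intersection = (probs * targets).sum(dim=1)
        union = probs.sum(dim=1) + targets.sum(dim=1)
        dice = (2.0 * intersection + self.smooth) / (union + self.smooth)

        # dice is in [0, 1], so the loss 1 - dice can never become negative
        return 1.0 - dice.mean()

With this formulation the loss stays in [0, 1], which is a quick sanity check for your setup: if the loss goes negative or the Dice score exceeds 1, the model output or the target is most likely not in the range the loss expects.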