Hello everyone, I am trying to use Dice loss for my 3D point-cloud semantic segmentation model.
Although I have implemented the function by referencing some existing code, I am not sure whether it is correct, as the IoU on my validation set does not improve compared to using cross-entropy loss alone.
Below is my function for multi-class Dice loss:
```python
def diceLoss(prediction_g, label_g, num_class, epsilon=1):
    diceRatio_g = 0
    label_g = one_hot(label_g, 15).to(device)              # one-hot encode the 15 classes
    label_g = label_g.reshape(batch_size * 16384, -1)      # 16384 points per cloud -> (batch_size * n_pts, 15)
    prediction_g = prediction_g.reshape(batch_size * 16384, -1)
    for i in range(num_class):
        # probability of class i for every point
        pred = torch.nn.functional.softmax(prediction_g, dim=1)[:, i]
        pred = pred.reshape(-1, 1)                         # (batch_size * n_pts, 1)
        diceLabel_g = label_g.sum(dim=0)[i]                # number of points labelled i
        dicePrediction_g = pred.sum(dim=0)                 # total predicted mass for class i
        diceCorrect_g = (pred[:, 0] * label_g[:, i]).sum(dim=0)  # intersection with class i
        # print(diceLabel_g, dicePrediction_g, diceCorrect_g)
        diceRatio_g += (2 * diceCorrect_g + epsilon) \
            / (dicePrediction_g + diceLabel_g + epsilon)
    loss = 1 - (1 / num_class) * diceRatio_g
    # print(loss)
    return loss
```

Note: in my first attempt the intersection was computed as `(pred * label_g)[:, 0]`, which broadcasts the (N, 1) prediction against all 15 label columns and then always keeps column 0, so every class term used the class-0 labels; indexing column `i` instead is what I believe the loss should do.
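For reference, here is a vectorized sketch of the same per-class Dice computation without the Python loop. This is my own rewrite, not the model code above: the function name `dice_loss` and its interface (raw logits of shape `(N, C)` and integer labels of shape `(N,)`, where `N = batch_size * n_pts`) are my assumptions, and it avoids the module-level `device` / `batch_size` globals by taking already-flattened tensors.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, labels, num_classes, epsilon=1.0):
    """Multi-class soft Dice loss (my sketch, not the original code).

    logits: (N, C) raw network outputs, N = batch_size * n_pts
    labels: (N,) integer class labels in [0, num_classes)
    """
    probs = F.softmax(logits, dim=1)                      # (N, C) class probabilities
    one_hot = F.one_hot(labels, num_classes).float()      # (N, C) one-hot targets
    intersection = (probs * one_hot).sum(dim=0)           # per-class overlap
    cardinality = probs.sum(dim=0) + one_hot.sum(dim=0)   # per-class denominator
    dice = (2 * intersection + epsilon) / (cardinality + epsilon)
    return 1 - dice.mean()                                # average Dice over classes
```

With confident, correct logits the per-class Dice ratios approach 1 and the loss approaches 0, which is a quick sanity check you can run before training.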
Please have a look and let me know if there is any problem. Thank you!