I wrote a custom Dice coefficient loss function (DCWithLogitsLoss), but I don't know what I'm missing: it keeps giving me the error "Element 0 of tensors does not require grad and does not have a grad_fn".
But when I use BCEWithLogitsLoss instead, I don't get any error.
Here is my code:
```python
class DCWithLogitsLoss(nn.Module):
    """Dice Coefficient loss function with sigmoid activation"""

    def __init__(self):
        super().__init__()

    def __call__(self, SR, GT):
        eps = 1e-5
        assert SR.shape == GT.shape, "Predicted and Groundtruth images must have same size!"
        # SR = torch.sigmoid(SR)
        SR = (SR > 0.5).float()
        inter = SR * GT
        union = torch.sum(SR ** 2) + torch.sum(GT ** 2) + eps
        score = (2 * inter.float() + eps) / (union.float())
        return 1. - score
```
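For comparison, here is a sketch of a differentiable "soft" Dice loss (the class name `SoftDiceLoss` is my own, not from the code above). The hard threshold `(SR > 0.5).float()` is not differentiable and detaches the output from the autograd graph, which is the usual cause of the "does not require grad" error; keeping the sigmoid probabilities instead preserves the graph:

```python
import torch
import torch.nn as nn

class SoftDiceLoss(nn.Module):
    """Differentiable Dice loss: uses sigmoid probabilities, no hard threshold."""

    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, SR, GT):
        assert SR.shape == GT.shape, "Predicted and ground-truth must have same shape!"
        probs = torch.sigmoid(SR)               # soft probabilities keep the graph intact
        inter = torch.sum(probs * GT)           # reduce the intersection to a scalar
        union = torch.sum(probs ** 2) + torch.sum(GT ** 2)
        score = (2 * inter + self.eps) / (union + self.eps)
        return 1. - score
```

Note that the intersection is summed before forming the ratio, so the loss is a single scalar that `backward()` can be called on directly.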