Error in custom loss function

I wrote a custom loss function, a Dice coefficient loss (DCWithLogitsLoss), but I don't know what I'm missing: it keeps giving me the error "element 0 of tensors does not require grad and does not have a grad_fn".
When I use BCEWithLogitsLoss instead, I don't get any error.

Here is my code:

import torch
import torch.nn as nn

class DCWithLogitsLoss(nn.Module):
    """Dice coefficient loss function with sigmoid activation"""

    def __init__(self):
        super().__init__()

    def __call__(self, SR, GT):
        eps = 1e-5
        assert SR.shape == GT.shape, "Predicted and groundtruth images must have the same shape!"

        #SR = torch.sigmoid(SR)
        SR = (SR > 0.5).float()
        inter = SR * GT
        union = torch.sum(SR ** 2) + torch.sum(GT ** 2) + eps
        score = (2 * inter.float() + eps) / union.float()

        return 1. - score

The threshold operation is not differentiable, so you are detaching the result from the computation graph:

SR = torch.randn(10, 10, requires_grad=True)
SR = (SR > 0.5).float() 
print(SR.requires_grad)
> False
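
Instead of thresholding, you could apply the sigmoid and use the probabilities directly, so that gradients can flow. Here is a minimal sketch of such a "soft" Dice loss adapted from your code (the name SoftDiceWithLogitsLoss is just a placeholder):

import torch
import torch.nn as nn

class SoftDiceWithLogitsLoss(nn.Module):
    """Differentiable Dice loss using sigmoid probabilities instead of a hard threshold."""

    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, SR, GT):
        assert SR.shape == GT.shape, "Predicted and groundtruth images must have the same shape!"
        probs = torch.sigmoid(SR)          # keeps the computation graph intact
        inter = torch.sum(probs * GT)      # soft intersection
        union = torch.sum(probs ** 2) + torch.sum(GT ** 2) + self.eps
        return 1. - (2. * inter + self.eps) / union

Also note that it's recommended to override forward instead of __call__, so that nn.Module hooks are called properly.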

You could also take a look at e.g. the kornia implementation of the dice loss and reuse it.

Thank you for the response. I will use it.