Hi everyone,

I’m currently facing a problem while trying to write my own loss function. After watching several tutorials I thought I had done it the right way, but my loss does not decrease: it stays perfectly constant. I used Variables with requires_grad=True and only performed torch operations, so I really do not understand what is wrong. Does anyone have an idea?

You can find my code below:

```python
class DistanceLoss(torch.nn.Module):

    def __init__(self):
        super(DistanceLoss, self).__init__()

    def forward(self, output, target):
        output = Variable(output, requires_grad=True).to('cuda')
        target = Variable(target, requires_grad=True).to('cuda')
        binarized_output = torch.argmin(output, 1).type(dtype)
        one_hot_output = torch.nn.functional.one_hot(binarized_output.to(torch.int64)).type(dtype)[0, :, :, :]
        for c in range(2):
            target_coordinates = ((target[:, :, c] == 1).nonzero(as_tuple=False)).type(dtype)
            output_coordinates = ((one_hot_output[:, :, c] == 1).nonzero(as_tuple=False)).type(dtype)
            dist_matrix = torch.cdist(output_coordinates, target_coordinates,
                                      p=2.0, compute_mode='use_mm_for_euclid_dist_if_necessary')
            loss = torch.sum(torch.amin(dist_matrix, 1))
        return Variable(loss, requires_grad=True).to('cuda')
```
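In case it helps to pin down the symptom, here is a minimal standalone sketch (CPU-only, with `w` standing in for a model parameter) of what re-wrapping an intermediate tensor as a fresh leaf does. This is effectively what `Variable(t, requires_grad=True)` amounts to, since `Variable` has been merged into `Tensor` since PyTorch 0.4:

```python
import torch

w = torch.randn(3, requires_grad=True)   # stands in for a model parameter
x = w * 2.0                              # non-leaf tensor, connected to w

# A fresh leaf built from x: same values, but no link back to w
y = x.detach().requires_grad_(True)
loss = y.sum()
loss.backward()
print(y.grad)   # tensor([1., 1., 1.]) -- gradient stops at the new leaf
print(w.grad)   # None -- nothing reaches the parameter
```

So even though `y` has `requires_grad=True` and `backward()` runs without error, the gradient never reaches `w`, which would leave the parameters unchanged and the loss constant.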

My understanding is that the operations I perform between the predicted image and the computed loss do not preserve the grad_fn of the tensors involved. Nevertheless, I need these operations in order to calculate the loss. Is there a way I can do this?
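To illustrate what I mean, here is a minimal standalone check (CPU-only for simplicity) showing that `torch.argmin` returns integer indices with no `grad_fn`, so the autograd graph is cut at that point:

```python
import torch

# A tensor produced by some differentiable computation
output = torch.randn(2, 3, 4, 4, requires_grad=True) * 2.0
print(output.grad_fn is not None)   # True: part of the autograd graph

# argmin returns integer indices: non-differentiable, no grad_fn
binarized = torch.argmin(output, 1)
print(binarized.grad_fn)            # None: the graph is cut here
```

Everything computed from `binarized` downstream (the one-hot encoding, the coordinate extraction, the distance matrix) is therefore disconnected from `output`.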

Thank you very much!