I tried to build my own loss function that uses torch.sort (it is a sequential ranking problem). Here is the code:
def loss_fn(output, target):
    sorted, indices = torch.sort(output, descending=True)
    loss = torch.mean((output - target) ** 2)
    loss2 = torch.mean((indices - target) ** 2)
    return loss2
When I return loss, there is no error. However, when I return loss2, I get: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.
The error is not fixed by setting
loss2.requires_grad = True
When I inspect the two tensors, the only difference I can see between loss and loss2 is that loss has grad_fn=MeanBackward0. How can I get loss2 to have a grad_fn so that I can use it as my loss function?
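Here is a minimal, self-contained reproduction of the difference (the shapes are arbitrary; any shapes show the same behavior):

```python
import torch

output = torch.randn(5, requires_grad=True)
target = torch.randn(5)

# torch.sort returns the sorted values (float, tracked by autograd)
# and the permutation indices (int64, not tracked by autograd)
sorted_vals, indices = torch.sort(output, descending=True)

loss = torch.mean((output - target) ** 2)
loss2 = torch.mean((indices - target) ** 2)

print(loss.grad_fn)   # a MeanBackward0 node: loss is connected to the graph
print(loss2.grad_fn)  # None: indices carry no graph, so loss2 has no grad_fn
```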