Customized loss function lost grad_fn

I tried to build my own loss function that uses torch.sort (it is a sequential ranking problem). Here is the code:

    import torch

    def loss_fn(output, target):
        # torch.sort returns differentiable values and integer indices
        sorted_output, indices = torch.sort(output, descending=True)
        loss = torch.mean((output - target) ** 2)
        loss2 = torch.mean((indices - target) ** 2)
        return loss2

When I return loss, there is no error. However, when I return loss2, I get: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn.

This error cannot be fixed by setting

    loss2.requires_grad = True

When I look at the tensors, the only difference I can see between loss and loss2 is that loss has grad_fn=<MeanBackward0>. How could I add this to loss2 so that I can use my own loss function?

Hi Leo!

The core problem is that indices consists of discrete integers (in particular,
longs), so it is not (usefully) differentiable.
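You can check this directly: torch.sort() returns differentiable values alongside integer indices, and only the values carry a grad_fn. A quick illustration:

    import torch

    output = torch.randn(4, requires_grad=True)
    values, indices = torch.sort(output, descending=True)

    print(values.grad_fn)   # a SortBackward node -- the values are differentiable
    print(indices.dtype)    # torch.int64 -- discrete positions
    print(indices.grad_fn)  # None -- autograd cannot flow through integer indices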

This means that what you are trying to do probably doesn't make sense as
written. But if it were to make sense somehow, you would have to find a
usefully differentiable proxy for indices, one that captures something
relevant to output's sort order while remaining differentiable, and compute
your loss function from that.
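As one sketch of such a proxy (an illustration of the general idea, not the only option): a "soft rank" replaces the hard count "how many elements are larger than output[i]" with a sum of sigmoids, which is differentiable. The temperature parameter below is a made-up knob controlling how sharply the sigmoid approximates a hard comparison, and the target here is assumed to hold the desired ranks:

    import torch

    def soft_rank(x, temperature=0.1):
        # pairwise differences: diff[i, j] = x[j] - x[i]
        diff = x.unsqueeze(-2) - x.unsqueeze(-1)
        # smooth count of elements larger than x[i]; approximates the
        # 0-based rank that torch.sort(descending=True) would assign
        ranks = torch.sigmoid(diff / temperature).sum(dim=-1)
        # drop the 0.5 that each element contributes by comparison with itself
        return ranks - 0.5

    def loss_fn(output, target):
        # soft ranks are differentiable, so this loss has a grad_fn
        return torch.mean((soft_rank(output) - target) ** 2)

    output = torch.randn(5, requires_grad=True)
    target = torch.randperm(5).float()  # hypothetical target ranks 0..4
    loss = loss_fn(output, target)
    loss.backward()  # no error now

With a small temperature the soft ranks approach the hard ranks, at the cost of vanishing gradients far from the comparison boundaries; whether that trade-off works is problem-dependent.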

Best.

K. Frank