Custom loss function using argsort

I am new to PyTorch. I want to implement a custom loss function. The output y of my neural network is a one-dimensional tensor (i.e., the output of an Embedding(n, 1) layer). The loss function applies some computation to the original data, sorted according to argsort(y). Here is my function, but it produces RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. Could you please help me implement this function?

def myCustomLossFunc3(data_nd, y):
    path = torch.argsort(y)        # indices that would sort y
    data_sorted = data_nd[path]    # reorder the original data along dim 0
    # some computation on data_sorted
    return torch.norm(data_sorted[-1] - data_sorted[0])

Try printing requires_grad for each tensor involved in your loss. For instance, if data_nd.requires_grad is False, the loss has no grad_fn and backward() fails with exactly this error.
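A minimal sketch of that check (the names myCustomLossFunc3, data_nd, and y come from your post; the tensor shapes here are made-up assumptions):

import torch

data_nd = torch.randn(10, 3)                # hypothetical original data
y = torch.randn(10, requires_grad=True)     # hypothetical network output

loss = myCustomLossFunc3(data_nd, y)
print(data_nd.requires_grad)   # False -> the loss cannot be backpropagated
print(y.requires_grad)         # True, but argsort cuts the graph to y
print(loss.requires_grad)      # False -> backward() raises the RuntimeError

Note also that torch.argsort returns plain integer indices, so even once data_nd requires gradients, no gradient will flow back to y through the sorting step itself.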

Thank you for your help.