Autograd Static Error

Hey guys,

hope you are doing well. I want to use a very cool loss function from a new paper, and it uses autograd with a staticmethod like:

    class RankSort(torch.autograd.Function):
        @staticmethod
        def forward(ctx, logits, targets, delta_RS=0.50, eps=1e-10):
            classification_grads = torch.zeros(logits.shape).cuda()

            # Filter fg logits
            fg_labels = (targets > 0.)
            fg_logits = logits[fg_labels]
            fg_targets = targets[fg_labels]
            fg_num = len(fg_logits)

…and so on.

But if I use the function in my training loop like:

    rank_sort = RankSort(k=3)

    ranking_loss, sorting_loss = rank_sort(logits, y, delta_RS=0.5)
    classification_loss = criterion(logits, y)
    loss = classification_loss + ranking_loss + sorting_loss

I got the following error:

RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: Automatic differentiation package - torch.autograd — PyTorch 1.13 documentation)

I do not know why, because as I understood it I am using a static method.

Thanks for the help!

It seems the .apply call is missing in your code, as indicated by the linked documentation.
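
For reference, a new-style autograd.Function is not instantiated; its static forward is invoked through the class' .apply method. Here is a minimal sketch of how the training-loop call could look, assuming forward returns the ranking and sorting losses as in your snippet (delta_RS is passed positionally, since .apply may not forward keyword arguments, and the k=3 argument is dropped because it does not appear in the forward signature you posted):

    # Call the static forward via RankSort.apply instead of instantiating the class.
    # delta_RS=0.5 is passed positionally; .apply may not accept keyword arguments.
    ranking_loss, sorting_loss = RankSort.apply(logits, y, 0.5)
    classification_loss = criterion(logits, y)
    loss = classification_loss + ranking_loss + sorting_loss
    loss.backward()  # backpropagate as usual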