Loss function error when calling backward()

Hi @ptrblck, @albanD,

I have this problem:

I tried to write a custom loss function for triplet loss. forward() works, but calling loss.backward() raises a lot of errors. I tried to debug it, and as Alban said, since forward takes two inputs, backward has to return two things, but I still can't work out what to do with that. Sorry if I am being annoying.
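Just to check my understanding of that point: since forward takes two inputs, backward has to return two gradients, one per input and in the same order. Is it something like this toy sketch (the Mul name and the multiply example are mine, just to illustrate the pattern)?

    import torch

    class Mul(torch.autograd.Function):
        # toy two-input Function: backward returns one gradient per forward input
        @staticmethod
        def forward(ctx, a, b):
            ctx.save_for_backward(a, b)
            return a * b

        @staticmethod
        def backward(ctx, grad_output):
            a, b = ctx.saved_tensors
            # d(a*b)/da = b and d(a*b)/db = a, so two gradients come back
            return grad_output * b, grad_output * a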

Also, in the custom autograd Function, after getting the two vectors v1 and v2, I detached them from the graph, did the operations in NumPy, and at the end converted the result back to a torch tensor and returned it; I call it with .apply(v1, v2). I followed the given autograd tutorial:

    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return input.clamp(min=0)
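
and the tutorial pairs that forward with this backward (from the same example), returning a single gradient because forward took a single input:

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input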

In the tutorial, input is fed to the function directly, but in my case I have done NumPy operations on it.

I am also confused about what I should pass to ctx.save_for_backward(): I tried both the final converted torch tensor and the original v1 and v2, but neither works. I am also unsure what should be done in backward() in my case.
Thank you for your help.

My custom autograd loss code is below:

import numpy as np
import torch

class TripletLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v1, v2, margin=0.25):
        # similarity matrix between the two batches of vectors
        scores = np.dot(v1.detach().numpy(), v2.detach().numpy().T)
        batch_size = len(scores)
        positive = np.diag(scores)  # the positive ones (duplicates)
        # push the diagonal far down so it never wins the max
        negative_without_positive = scores - 2.0 * np.identity(batch_size)
        closest_negative = negative_without_positive.max(axis=1)
        # zero out the diagonal, then average the remaining negatives per row
        negative_zero_on_duplicate = scores * (1.0 - np.eye(batch_size))
        mean_negative = np.sum(negative_zero_on_duplicate, axis=1) / (batch_size - 1)
        triplet_loss1 = torch.Tensor(np.maximum(0.0, margin - positive + closest_negative))
        triplet_loss2 = torch.Tensor(np.maximum(0.0, margin - positive + mean_negative))
        triplet_loss = torch.mean(triplet_loss1 + triplet_loss2)
        triplet_loss.requires_grad = True
        ctx.save_for_backward(triplet_loss)
        return triplet_loss

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors  # saved_tensors is a tuple
        grad_input = grad_output.clone()
        grad_input[v1 < 0] = 0  # this is where it fails: v1 and v2 are not defined in this scope
        grad_input[v2 < 0] = 0
        return grad_input
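
From reading around, I suspect the detach-to-NumPy step is what breaks the graph, so one direction I am considering is rewriting the same math entirely in torch ops and letting autograd derive backward() on its own, with no custom Function at all. A rough sketch of that idea (the TripletLossTorch name and the nn.Module form are my own, not from the tutorial):

    import torch
    import torch.nn as nn

    class TripletLossTorch(nn.Module):
        # same computation as above, but in torch ops so autograd can differentiate it
        def __init__(self, margin=0.25):
            super().__init__()
            self.margin = margin

        def forward(self, v1, v2):
            scores = torch.mm(v1, v2.t())   # similarity matrix
            batch_size = scores.size(0)
            eye = torch.eye(batch_size, device=scores.device)
            positive = torch.diag(scores)   # duplicates on the diagonal
            closest_negative = (scores - 2.0 * eye).max(dim=1).values
            mean_negative = (scores * (1.0 - eye)).sum(dim=1) / (batch_size - 1)
            loss1 = torch.clamp(self.margin - positive + closest_negative, min=0.0)
            loss2 = torch.clamp(self.margin - positive + mean_negative, min=0.0)
            return torch.mean(loss1 + loss2)

    # quick check that gradients flow back to both inputs
    v1 = torch.randn(4, 8, requires_grad=True)
    v2 = torch.randn(4, 8, requires_grad=True)
    loss = TripletLossTorch()(v1, v2)
    loss.backward()
    print(v1.grad.shape)  # torch.Size([4, 8])

Would that be the right way to go here, or is there still a reason to keep the custom autograd.Function?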