SoftMarginRankingLoss Implementation

Hi,

I am trying to implement a custom loss function, SoftMarginRankingLoss. The size of my input tensors is N x C x H x W, i.e. (128, 64, 14, 14); they are essentially the output of a VGG16 at conv5. However, my implementation gives me the following error during gradient backpropagation.

Error:

File "train.py", line 431, in <module>
    main()
File "train.py", line 325, in main
    loss.backward()
File "/usr/local/lib/python3.8/dist-packages/torch/tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 166, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors, is_grads_batched=False)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 67, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs

import torch
import torch.nn.functional as F

class SoftMarginRankingLoss(torch.nn.Module):
    def __init__(self, weight=None, size_average=True, alpha=1):
        super(SoftMarginRankingLoss, self).__init__()
        self.alpha = alpha

    def forward(self, anchor, positive_match, negative_match):
        # Distances from the anchor to the positive and negative matches
        distance_pos = F.pairwise_distance(anchor, positive_match, keepdim=True)
        distance_neg = F.pairwise_distance(anchor, negative_match, keepdim=True)

        # Soft-margin ranking loss: log(1 + exp(alpha * (d_pos - d_neg)))
        distance = distance_pos - distance_neg
        loss = torch.log(1 + torch.exp(self.alpha * distance))

        return loss

The error occurs because SoftMarginRankingLoss does not return a scalar: it returns a tensor whose shape is derived from the input tensors. loss.backward() can only implicitly create the initial gradient for a scalar output. To fix this, reduce the loss tensor to a scalar before returning it, e.g. by taking its mean or sum.
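
For reference, here is a minimal sketch of the fixed forward method, using mean reduction (the rest of the class stays unchanged):

    def forward(self, anchor, positive_match, negative_match):
        distance_pos = F.pairwise_distance(anchor, positive_match, keepdim=True)
        distance_neg = F.pairwise_distance(anchor, negative_match, keepdim=True)

        distance = distance_pos - distance_neg
        loss = torch.log(1 + torch.exp(self.alpha * distance))

        # Reduce to a scalar so backward() can implicitly create the initial gradient
        return loss.mean()

Whether to use mean() or sum() is a design choice; mean() is usually preferred because the loss magnitude then does not scale with the batch size. As a side note, torch.nn.functional.softplus(self.alpha * distance) computes log(1 + exp(x)) in a numerically more stable way.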


Thanks, that solved the problem!

I am glad it solved your problem!