How to add new loss functions

(Morteza Mohammady Gharasuie) #1

I defined a custom loss function in PyTorch, but I get an error I cannot find a solution for. Here is my code:

class cust_loss(torch.nn.Module):
    def __init__(self):
        super(cust_loss, self).__init__()

    def forward(self, input, target):
        predicted_labels = torch.max(input, 1)[1]
        minus = torch.max(input, 1)[1] - target
        cust_distance = torch.sum(minus * minus).type(torch.FloatTensor) / predicted_labels.size()[0]
        return cust_distance

######## within main function ######

criterion = cust_loss()  # nn.CrossEntropyLoss()
optimizer = optim.SGD(filter(lambda p: p.requires_grad, model_conv.parameters()), lr=1e-3, momentum=0.9)
loss = criterion(inputs, labels)
Unfortunately, I got this error:
Traceback (most recent call last):
  File "/home/morteza/PycharmProjects/transfer_learning/", line 250, in <module>
  File "/home/morteza/PycharmProjects/transfer_learning/", line 130, in main
  File "/home/morteza/anaconda3/lib/python3.6/site-packages/torch/autograd/", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/morteza/anaconda3/lib/python3.6/site-packages/torch/autograd/", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/home/morteza/anaconda3/lib/python3.6/site-packages/torch/autograd/", line 91, in apply
    return self._forward_cls.backward(self, *args)
  File "/home/morteza/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/", line 38, in backward
    return maybe_unexpand(grad_output, ctx.a_size), maybe_unexpand_or_view(grad_output.neg(), ctx.b_size), None
  File "/home/morteza/anaconda3/lib/python3.6/site-packages/torch/autograd/", line 381, in neg
    return Negate.apply(self)
  File "/home/morteza/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/", line 224, in forward
    return i.neg()
 AttributeError: 'torch.LongTensor' object has no attribute 'neg'

I could not solve it. I traced the code and compared it with code that is error-free, but without success. Moreover, I defined my inputs and labels as Variables with the requires_grad=True parameter.
Please guide me on how to solve it.
Thank you.

(Alexis David Jacq) #2

When autograd computes the backward pass of the subtraction minus = torch.max(input, 1)[1] - target, it uses .neg() for the "-". But predicted_labels is a LongTensor (the indices returned by argmax), for which neg is not implemented. Cast it to float:


predicted_labels = torch.max(input, 1)[1].float()
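A minimal sketch of the issue and the fix (the batch size, class count, and values below are made up for illustration):

```python
import torch

# Dummy logits for a batch of 4 samples over 3 classes, and integer targets.
input = torch.randn(4, 3)
target = torch.tensor([0, 2, 1, 1])

# torch.max(..., 1)[1] returns a LongTensor of indices; subtracting two
# LongTensors keeps everything integral, which is what triggered the
# "'torch.LongTensor' object has no attribute 'neg'" error in backward.
predicted_labels = torch.max(input, 1)[1].float()  # cast indices to float
minus = predicted_labels - target.float()          # cast target as well
loss = torch.sum(minus * minus) / predicted_labels.size(0)
print(loss.item())
```

With both operands cast to FloatTensor, the squared-difference loss evaluates without the dtype error.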

(Morteza Mohammady Gharasuie) #3

Thank you very much for your reply and hint. I found the problem.
It seems the problem was due to the LongTensor type, which must be changed to FloatTensor; every tensor involved has to be a FloatTensor. So I changed the forward function in my cust_loss class as follows, and it worked.

    def forward(self, input, target):
        predicted_labels = torch.max(input, 1)[1].float()
        minus = predicted_labels - target.float()
        self.cust_distance = torch.sum(minus * minus) / predicted_labels.size()[0]
        return self.cust_distance
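For reference, here is the fixed class exercised end to end (the shapes and labels are assumed for illustration). One caveat worth noting: torch.max(input, 1)[1] is an argmax, which has no gradient, so while this loss now evaluates without error, it will not backpropagate a useful gradient into the model's logits.

```python
import torch

class cust_loss(torch.nn.Module):
    def __init__(self):
        super(cust_loss, self).__init__()

    def forward(self, input, target):
        # Argmax over the class dimension; this op is non-differentiable,
        # so gradients will not flow through predicted_labels into input.
        predicted_labels = torch.max(input, 1)[1].float()
        minus = predicted_labels - target.float()
        return torch.sum(minus * minus) / predicted_labels.size()[0]

# Dummy batch: 4 samples, 3 classes (hypothetical data).
criterion = cust_loss()
logits = torch.randn(4, 3)
labels = torch.tensor([0, 1, 2, 2])
loss = criterion(logits, labels)
print(loss.item())
```

The loss is the mean squared difference between the predicted and true label indices, returned as a FloatTensor scalar.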