Hi,
I am using a custom loss function in PyTorch. With a standard loss like CrossEntropyLoss() I can simply call .cuda() on it. How do I move my custom loss to cuda()?
The tensors and the model are both on CUDA, and nvidia-smi shows GPU memory in use. But I'm not sure how important criterion.cuda() is, or how to do the equivalent for my own loss function.
For reference, here's the loss:
import torch

class SpandanLoss(torch.nn.Module):
    '''
    Custom pairwise margin loss: model_output interleaves
    (true, false) scores for each example pair.
    '''
    def __init__(self):
        super(SpandanLoss, self).__init__()

    def forward(self, model_output, margin=0.8):
        total_loss = 0
        for i in range(0, len(model_output), 2):
            # Don't use .data here -- it detaches the values from the
            # autograd graph, so the loss can't backpropagate.
            true_val = model_output[i]
            false_val = model_output[i + 1]
            # clamp(min=0) is the hinge; unlike
            # torch.max(torch.zeros(1).cuda(), ...) it doesn't
            # hard-code a device.
            total_loss = total_loss + torch.clamp(margin - true_val + false_val, min=0)
        # Return the loss directly; wrapping it in
        # Variable(..., requires_grad=True) would create a new leaf
        # and cut the graph to the model.
        return total_loss
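For what it's worth, here's a minimal self-contained sketch of the same idea (function name and example scores are my own, not from the model above). The point it illustrates: clamp(min=0) avoids the hard-coded .cuda() zeros, so the loss simply follows whatever device its inputs live on, and since nothing is detached, backward() works:

```python
import torch

def pairwise_margin_loss(model_output, margin=0.8):
    # Hinge on interleaved (true, false) score pairs.
    # new_zeros(()) creates the accumulator on the same device/dtype
    # as the input, so no .cuda() is ever hard-coded.
    total = model_output.new_zeros(())
    for i in range(0, len(model_output), 2):
        total = total + torch.clamp(margin - model_output[i] + model_output[i + 1], min=0)
    return total

scores = torch.tensor([1.0, 0.5, 0.2, 0.9], requires_grad=True)
loss = pairwise_margin_loss(scores)
loss.backward()  # gradients flow: nothing was detached via .data
print(round(loss.item(), 4))  # 1.8  (0.3 from the first pair + 1.5 from the second)
```

Because a loss like this has no parameters or buffers, calling criterion.cuda() on it is effectively a no-op; the same module works on CPU and GPU inputs unchanged.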
Thanks!