Making a custom loss CUDA-compatible

Hi,

I am using a custom loss function in PyTorch. With a standard loss function like CrossEntropyLoss() I can simply call .cuda() on it. How do I move my custom loss to CUDA?

The tensors and the model are both on CUDA, and nvidia-smi shows GPU memory being used, but I'm not sure how important criterion.cuda() is, or how to do the equivalent for my own loss function.

For reference, here’s the loss -

import torch
from torch.autograd import Variable


class SpandanLoss(torch.nn.Module):
    '''
    Custom loss for our model: model_output holds (true, false) score pairs,
    and each pair contributes a hinge term max(0, margin - true + false).
    '''

    def __init__(self):
        super(SpandanLoss, self).__init__()

    def forward(self, model_output, margin=0.8):
        total_loss = 0
        for i in range(0, len(model_output), 2):
            true_val = model_output[i].data
            false_val = model_output[i + 1].data
            total_loss += torch.max(torch.zeros(1).cuda(), margin - true_val + false_val)

        return Variable(total_loss, requires_grad=True)

Thanks!


Still waiting for someone to reply to this! Any help appreciated!

Your custom loss already looks CUDA-ready. In general, you don't need to call .cuda() on a model or loss unless it has learnable nn.Parameter attributes (as in a ResNet or an RNN, for example).
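If the loss did contain learnable parameters, registering them as nn.Parameter on the module is what lets .cuda() (or .to(device)) move them along with it. A minimal sketch, assuming a hypothetical learnable scale added to the same margin-style loss (the class name and the scale parameter are made up for illustration):

import torch
import torch.nn as nn


class WeightedMarginLoss(nn.Module):
    '''Sketch: a margin loss with a hypothetical learnable scale.'''

    def __init__(self, margin=0.8):
        super(WeightedMarginLoss, self).__init__()
        self.margin = margin
        # Registered as nn.Parameter, so criterion.cuda() moves it to the GPU.
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, model_output):
        # Start the accumulator on the same device as the model output.
        total_loss = model_output.new_zeros(1)
        for i in range(0, len(model_output), 2):
            hinge = torch.clamp(self.margin - model_output[i] + model_output[i + 1], min=0)
            total_loss = total_loss + self.scale * hinge
        return total_loss


criterion = WeightedMarginLoss().cuda()  # moves self.scale to the GPU

Because the parameter lives on the module, criterion.cuda() behaves just like it does for the built-in losses.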


Thanks! And what if it does have learnable parameters? Because for a custom function I can't just do a .cuda().

I convert every Variable in the custom loss function to CUDA. This method is slow, but it works for me. I hope somebody else can come up with a faster way.
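For reference, that pattern looks roughly like the following (a sketch, not the poster's actual code); re-allocating torch.zeros(1).cuda() inside forward on every call is probably part of why it feels slow:

import torch
import torch.nn as nn


class AllTensorsOnCuda(nn.Module):
    '''Sketch: every tensor created inside forward is sent to the GPU explicitly.'''

    def forward(self, model_output, margin=0.8):
        total_loss = torch.zeros(1).cuda()  # re-allocated on the GPU each call
        for i in range(0, len(model_output), 2):
            hinge = torch.max(torch.zeros(1).cuda(),
                              margin - model_output[i] + model_output[i + 1])
            total_loss = total_loss + hinge
        return total_loss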


Creating a device argument in the __init__ method and calling .to(device) on all tensors solved the problem in my case.
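A minimal sketch of that pattern, again using the margin-style loss from above (the names are illustrative, not from the original post):

import torch
import torch.nn as nn


class DeviceAwareLoss(nn.Module):
    '''Sketch: the target device is passed in once and reused for every tensor.'''

    def __init__(self, margin=0.8, device='cuda'):
        super(DeviceAwareLoss, self).__init__()
        self.margin = margin
        self.device = torch.device(device)

    def forward(self, model_output):
        # All tensors created here are placed directly on self.device.
        total_loss = torch.zeros(1, device=self.device)
        for i in range(0, len(model_output), 2):
            total_loss = total_loss + torch.clamp(
                self.margin - model_output[i] + model_output[i + 1], min=0)
        return total_loss


criterion = DeviceAwareLoss(device='cuda' if torch.cuda.is_available() else 'cpu')

Reading model_output.device inside forward is another option that avoids the extra argument entirely.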