Adding a constant to the loss function


(Silviu) #1

Hello! I need to write a slightly modified percentage-error loss function with a threshold on the denominator. The relevant part of the code looks like this:

import numpy as np
import torch

threshold = 290000

def exp_rmspe(pred, target):
    loss = torch.abs(target - pred) / np.maximum(target, threshold)
    return loss.mean()

I have a batch size of 128. When I use this, I get this error:

RuntimeError: bool value of Variable objects containing non-empty torch.cuda.ByteTensor is ambiguous

How can I add that threshold constant in the loss function, without getting that error? Thank you!


(Alban D) #2

Hi,

Is this error coming from this function? Also, to avoid any issues, I would advise against mixing NumPy and Torch functions; you can use torch.max() instead.
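
Following that advice, here is a minimal sketch of the same loss with the NumPy call replaced by a pure-Torch op. Using torch.clamp(min=...) instead of torch.max() is a substitution of mine; it performs the same elementwise maximum against a scalar, so the computation stays on the GPU and inside the autograd graph:

import torch

threshold = 290000

def exp_rmspe(pred, target):
    # Bound the denominator from below with a Torch op rather than
    # np.maximum, which cannot handle CUDA tensors and triggers the
    # ambiguous-bool error above.
    denom = target.clamp(min=threshold)
    loss = torch.abs(target - pred) / denom
    return loss.mean()

A quick check with the batch size from the question (shapes and values are synthetic):

pred = torch.rand(128) * 600000
target = torch.rand(128) * 600000
print(exp_rmspe(pred, target))  # scalar tensor; works the same on CPU or CUDA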