If statement in loss function?

I want to define a custom loss function. The shape of output/y is [batch_size, 1, 12, 1, 1]. I calculate the absolute error between output and y, and if the error is bigger than a threshold, I apply a different loss function to that sample, similar to what nn.SmoothL1Loss does.
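(For reference, the built-in nn.SmoothL1Loss already implements this kind of piecewise behavior; in recent PyTorch versions the `beta` argument controls the switch point between the L2 and L1 regions. A minimal usage sketch:)

```python
import torch
import torch.nn as nn

# SmoothL1Loss uses a squared term for small errors and an L1 term
# for large ones; `beta` (available in recent PyTorch) is the switch point.
criterion = nn.SmoothL1Loss(beta=1.0)

output = torch.randn(4, 1, 12, 1, 1)
y = torch.randn(4, 1, 12, 1, 1)
loss = criterion(output, y)  # scalar tensor
```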
Below is my custom loss function:

def myLoss(output, y):
    # element-wise absolute error
    loss = (output - y).abs_()
    # loop over the batch dimension; rescale any sample whose mean
    # error exceeds the (globally defined) threshold
    for j in range(loss.size(0)):
        if loss[j].mean() > threshold:
            loss[j] = loss[j].add_(1).log_().mul_(50)
    return loss.mean()

This loss function runs, but it slows training down considerably (about 10x slower than L1Loss).
My question: is it possible to apply such a loss in a more efficient way?

Try to avoid the loop:

def myLoss(output, y):
    loss = (output - y).abs()
    # per-sample mean over all non-batch dims, as in the original loop
    mask = loss.view(loss.size(0), -1).mean(dim=1) > threshold
    loss[mask] = loss[mask].add(1).log().mul(50)
    return loss.mean()

Maybe you would have to add a .clone() to the indexing line.
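(A sketch of another loop-free variant that avoids boolean indexing and assignment entirely by using torch.where; the threshold value and the function name here are placeholders:)

```python
import torch

threshold = 0.5  # assumed value for illustration

def myLossWhere(output, y):
    # element-wise absolute error; no in-place ops, so autograd is safe
    loss = (output - y).abs()
    # per-sample mean error, matching the original loop over dim 0
    mask = loss.view(loss.size(0), -1).mean(dim=1) > threshold
    # reshape the [batch_size] mask so it broadcasts over the other dims
    mask = mask.view(-1, *([1] * (loss.dim() - 1)))
    # where the mask is True, take the log-scaled loss; else the plain L1
    return torch.where(mask, loss.add(1).log().mul(50), loss).mean()
```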

Thanks for your reply!
It works fine:

def myLoss(output, y):
    loss = (output.view(output.size(0), -1) - y.view(y.size(0), -1)).abs_()
    # element-wise mask (note: this thresholds each element,
    # not the per-sample mean as in the original loop)
    mask = loss > threshold
    loss[mask] = loss[mask].add(1).log().mul(50)
    return loss.mean()

And is it faster, the same speed, or even slower?

I think it's acceptable: only about 40% slower than the native L1Loss, and much faster than my previous implementation.
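(A rough way to check such numbers is a small CPU microbenchmark like the sketch below; the 0.5 threshold and the iteration count are arbitrary, and real timings depend heavily on device and tensor shapes:)

```python
import time
import torch
import torch.nn as nn

threshold = 0.5  # assumed value for illustration

def myLoss(output, y):
    # flattened element-wise absolute error
    loss = (output.view(output.size(0), -1) - y.view(y.size(0), -1)).abs()
    mask = loss > threshold
    loss[mask] = loss[mask].add(1).log().mul(50)
    return loss.mean()

output = torch.randn(512, 1, 12, 1, 1)
y = torch.randn(512, 1, 12, 1, 1)

l1 = nn.L1Loss()
for name, fn in [("L1Loss", lambda: l1(output, y)),
                 ("masked", lambda: myLoss(output, y))]:
    start = time.perf_counter()
    for _ in range(100):
        fn()
    print(f"{name}: {time.perf_counter() - start:.4f}s")
```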
