Custom conditional loss function

Hello all,

I am trying to implement a custom loss function that includes a condition:

import torch

def ReverseSmoothL1Loss(output, target):
    absolute_errors = torch.abs(output - target)
    absolute_errors_squared = torch.square(absolute_errors)
    # Squared error where the absolute error exceeds 1, absolute error otherwise
    loss = torch.where(absolute_errors > 1, absolute_errors_squared, absolute_errors)
    return torch.mean(loss)

The results look reasonable, but is this implementation valid for autograd?
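
As a sanity check, I tried verifying the gradients numerically with torch.autograd.gradcheck (a minimal sketch; gradcheck expects double-precision inputs with requires_grad=True, and the input sizes here are just placeholders):

# Compare autograd's analytic gradients against finite differences.
# Random doubles are unlikely to land exactly on the kinks at
# |output - target| == 0 or == 1, where the loss is not differentiable.
output = torch.randn(8, dtype=torch.float64, requires_grad=True)
target = torch.randn(8, dtype=torch.float64)
print(torch.autograd.gradcheck(ReverseSmoothL1Loss, (output, target)))

This prints True for me, but I am not sure whether that is sufficient to conclude the torch.where branch is handled correctly in general.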