Hello,
I have been experimenting with L1 and MSE losses in PyTorch and noticed that L1Loss performs better because it is more robust to outliers. I then discovered SmoothL1Loss, which seems to be the best of both worlds. I understand that it behaves like MSELoss for errors below 1 and like L1Loss otherwise. My dataset only contains values between 0 and 1, so the largest possible error is 1. Does this mean the function behaves identically to MSELoss? Is it possible to adjust the threshold in any way so it works better for my problem?
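To show my understanding, here is the piecewise formula sketched in plain Python (following the documented definition of SmoothL1Loss; the `beta` threshold parameter is my reading of the docs, not something I have tested):

```python
def mse(err):
    # elementwise squared error, as in nn.MSELoss
    return err ** 2

def smooth_l1(err, beta=1.0):
    # piecewise form documented for nn.SmoothL1Loss:
    # quadratic for |err| < beta, linear otherwise
    a = abs(err)
    if a < beta:
        return 0.5 * a ** 2 / beta
    return a - 0.5 * beta

# For errors in [0, 1) with beta = 1, the smooth L1 value is
# exactly half the squared error, so it has the same quadratic
# shape as MSE, just scaled by 0.5.
for e in [0.1, 0.5, 0.9]:
    print(e, mse(e), smooth_l1(e))
```

If this sketch is right, then with my error range the loss is quadratic everywhere, which is what prompted my question about moving the threshold.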
Regards,