How to adjust SmoothL1Loss for a particular need

Hello,

I have been experimenting with L1 and MSE losses in PyTorch and noticed that L1Loss performs better because it is more robust to outliers. I discovered SmoothL1Loss, which seems to be the best of both worlds. I understand that it behaves like MSELoss for error < 1 and like L1Loss otherwise. My dataset only contains values between 0 and 1, so the largest possible error is 1. Does this mean the function behaves identically to MSELoss? Is it possible to adjust the threshold in any way to make it work better for my problem?
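For context, here is roughly the comparison I have been running (the random tensors are just a stand-in for my data):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
pred = torch.rand(8)    # stand-in for model outputs in [0, 1]
target = torch.rand(8)  # stand-in for my [0, 1] targets

for loss_fn in (nn.L1Loss(), nn.MSELoss(), nn.SmoothL1Loss()):
    print(type(loss_fn).__name__, loss_fn(pred, target).item())
```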

Regards,

Hi,

The boundary between “like MSELoss” and “like L1Loss” behavior is set by the beta argument (to the best of my knowledge).
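If it helps, a minimal sketch of that boundary (the tensors and beta values are made up for illustration). Per element, SmoothL1Loss is `0.5 * err**2 / beta` when `|err| < beta`, and `|err| - 0.5 * beta` otherwise, so with the default `beta=1.0` and all errors below 1 it matches a *scaled* MSE, not MSE itself:

```python
import torch
import torch.nn as nn

pred = torch.tensor([0.2, 0.9])
target = torch.tensor([0.5, 0.1])  # per-element errors 0.3 and 0.8, both < 1

# With the default beta=1.0 every element is in the quadratic region,
# so the result equals 0.5 * MSE -- close to MSELoss, but scaled.
print(nn.SmoothL1Loss(beta=1.0)(pred, target))   # tensor(0.1825)
print(0.5 * nn.MSELoss()(pred, target))          # tensor(0.1825)

# Lowering beta pushes the boundary into the [0, 1] error range:
# here both errors exceed 0.25, so both get the linear treatment.
print(nn.SmoothL1Loss(beta=0.25)(pred, target))  # tensor(0.4250)
```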

First, you should check your model's output range, not your dataset's: nothing constrains the predictions to [0, 1], so the error can be larger than 1.
Second, the L1 norm sums the absolute differences along a dimension. Even though your data is in the 0-1 range, the L1 norm can still be greater than one, e.g. L1([.5, .5, .5]) = 1.5.
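A quick illustration of both points (the tensors here are made up):

```python
import torch

# Nothing forces an unconstrained model head to stay in [0, 1],
# so individual errors can exceed 1 even on [0, 1] targets.
pred = torch.tensor([1.7, -0.3, 0.5])
target = torch.tensor([0.0, 1.0, 0.5])
print((pred - target).abs())  # tensor([1.7000, 1.3000, 0.0000])

# The L1 norm sums absolute values along a dimension, so it can
# exceed 1 even when every entry is at most 1.
print(torch.linalg.vector_norm(torch.tensor([0.5, 0.5, 0.5]), ord=1))  # tensor(1.5000)
```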
Also, as @Apurv25 said, you can change beta to move that boundary.
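For example (beta=0.1 is just an illustrative choice, not a recommendation):

```python
import torch.nn as nn

# Hypothetical setting: errors below 0.1 are treated quadratically,
# larger ones linearly (i.e. more robust to outliers).
criterion = nn.SmoothL1Loss(beta=0.1)
```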