Loss to penalize overestimation

I am working on a regression problem and am currently using MSE loss, but I need to penalize late predictions more than early ones. In other words, I need to penalize (y_pred - y_true) > 0 more heavily than (y_pred - y_true) < 0. I would appreciate it if someone could recommend a suitable loss function that is still smooth like MSE.

The sigmoid function is a soft version of the step function, so you can do:

import torch

diff = y_pred - y_true           # signed error; positive means overestimation
diff2 = diff ** 2                # squared error, as in MSE
mask = torch.sigmoid(diff)       # ~0 for underestimates, ~1 for overestimates
losses = diff2 * (1 - mask) + diff2 * mask * 2
loss = losses.mean()

This penalizes (y_pred - y_true) > 0 approximately twice as much as the reverse, since the expression simplifies to diff2 * (1 + mask). If we set diff to torch.linspace(-5, 5, 11), then losses ends up as:

tensor([25.1673, 16.2878,  9.4268,  4.4768,  1.2689,  0.0000,  1.7311,  7.5232,
        17.5732, 31.7122, 49.8327])
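
For reference, the printout above can be reproduced as a self-contained check (diff stands in directly for y_pred - y_true here):

import torch

diff = torch.linspace(-5, 5, 11)   # errors from -5 to 5
diff2 = diff ** 2
mask = torch.sigmoid(diff)
losses = diff2 * (1 - mask) + diff2 * mask * 2
print(losses)

Note how each positive error costs roughly twice as much as the negative error of the same magnitude, and the transition through zero stays smooth.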

Hi There,

For this purpose, I think applying LeakyReLU to the error before computing the loss and then backpropagating will help you, since it traces f(x) = x for x > 0 and f(x) = negative_slope * x for x < 0. Keeping negative_slope in (0, 1) scales down the penalty for underestimates.
Be sure to check out torch.nn.LeakyReLU.
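
A minimal sketch of that idea, assuming the LeakyReLU output is squared afterwards so the loss stays smooth like MSE (the name asymmetric_mse and the slope of 0.5 are just illustrative choices):

import torch
import torch.nn.functional as F

def asymmetric_mse(y_pred, y_true, negative_slope=0.5):
    # Shrink negative errors (underestimates) by negative_slope before
    # squaring, so overestimates cost 1 / negative_slope**2 times more.
    leaky = F.leaky_relu(y_pred - y_true, negative_slope)
    return (leaky ** 2).mean()

# With negative_slope=0.5, an overestimate of 1 costs 1.0 while an
# underestimate of 1 costs only 0.25.
print(asymmetric_mse(torch.tensor([1.0]), torch.tensor([0.0])))   # tensor(1.)
print(asymmetric_mse(torch.tensor([-1.0]), torch.tensor([0.0])))  # tensor(0.2500)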

All the best.


Thank you so much for your response.

OK, sure, I will check out LeakyReLU. Thanks a bunch!