Pytorch Weighted Regression Loss Functions

Is there a built-in weighted loss function for regression tasks? If there are third-party weighted loss functions, please let me know.

I am trying to use a weighted mean absolute error with a U-Net for semantic segmentation. I need to assign weights so that my model ignores the background and focuses on poorly predicted regions.

You can use nn.L1Loss with reduction='none' and multiply the computed elementwise loss by your weights before reducing it yourself.
Adding an internal weight argument for floating-point inputs would not make sense, as this loss function has no class knowledge (e.g. what “background” means for your use case).
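A minimal sketch of that approach: compute the unreduced L1 loss, multiply by a weight tensor, then reduce manually. The weighting rule here (downweighting positions where the target equals 1.0) is just a placeholder assumption to make the example concrete.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

criterion = nn.L1Loss(reduction='none')  # keep the elementwise loss

inputs = torch.rand(2, 1, 4, 4)
targets = torch.rand(2, 1, 4, 4)

# Hypothetical weighting scheme: downweight "background" positions
# (target == 1.0); any tensor broadcastable to the loss shape works.
weights = torch.where(targets == 1.0, torch.tensor(0.1), torch.tensor(1.0))

loss = criterion(inputs, targets)   # shape (2, 1, 4, 4), elementwise |inputs - targets|
weighted = (weights * loss).mean()  # reduce manually after weighting
```

Since the loss is unreduced, you are free to choose any reduction (mean, sum, or a normalized mean) after applying the weights.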

Thank you for your response.

Maybe I can explain it better. I would like a weighted loss where, for targets in the range [0, 1], targets with a value of 1.0 (background) get a lower weight, and the weight increases as the target value decreases from 1.

I wrote a custom loss function for this. Please let me know if there are any other suggestions.

import torch
import torch.nn as nn

class WeightedMAELoss(nn.Module):

    def __init__(self, beta=1.0, reduction='mean'):
        super().__init__()
        self.beta = beta
        self.reduction = reduction

    def forward(self, inputs, targets):
        diff = torch.abs(inputs - targets)

        # Weights decay exponentially as the target approaches 1 (background),
        # so low-valued targets contribute more to the loss.
        with torch.no_grad():
            weights = torch.exp(-self.beta * targets)
        weighted_diff = weights * diff

        if self.reduction == 'mean':
            return weighted_diff.mean()
        elif self.reduction == 'sum':
            return weighted_diff.sum()
        else:
            return weighted_diff

You might want to normalize the loss, e.g. by dividing by weights.sum(), as otherwise the target distribution (the frequency of 0s and 1s) would influence the loss scale.
