How to implement weighted mean square error?

Hello guys, I would like to implement the loss function below, which is a weighted mean squared error loss:

$$\mathrm{WMSE}(x, y, w) = \frac{1}{N}\sum_{i=1}^{N} w_i \, (x_i - y_i)^2$$

How can I implement such a loss function in PyTorch? In other words, is there any way to use nn.MSELoss to achieve the loss function mentioned above?

You can probably use a combination of tensor operations to compute your loss.
For example:

import torch

def mse_loss(input, target):
    # unweighted sum of squared errors
    return torch.sum((input - target) ** 2)

def weighted_mse_loss(input, target, weight):
    # per-element weights applied to the squared errors
    return torch.sum(weight * (input - target) ** 2)
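As a quick sanity check with made-up shapes (a batch of 4 predictions with 3 outputs each; in current PyTorch you can set requires_grad directly on the tensor):

input = torch.rand(4, 3, requires_grad=True)
target = torch.rand(4, 3)
weight = torch.rand(4, 3)   # per-element importance weights

loss = weighted_mse_loss(input, target, weight)
loss.backward()             # gradients end up in input.grad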

What I have understood from your snippet above is that I should create a nn.Module and write your code in its forward function. Am I right? Because I would like to use autograd for backprop!


You don’t need to write a nn.Module for that, a simple function is enough.
Note that backpropagation is handled by Variables, and not by nn.Module.

So this is perfectly fine:

import torch
from torch.autograd import Variable

def pow(input, n):
    return input ** n

t = Variable(torch.rand(1), requires_grad=True)
t2 = pow(t, 2)      # t squared
t6 = pow(t2, 3)     # (t^2)^3 = t^6
t6.backward()
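After the backward() call, the gradient is available on the leaf Variable:

print(t.grad)   # d(t^6)/dt = 6 * t^5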

Thank you. 🙂 I appreciate your response!


Were you able to solve this? When I try to set this function as my criterion, I get an error saying the function requires 3 inputs, which obviously aren't available until training. Maybe this is why it needs to go in an nn.Module?

This is the ugly hack I created that works for this problem: 16 outputs, with the first output weighted 0.5 (8/16) and each of the remaining 15 outputs weighted 0.5/15.

I’m sure there’s a better way to do this but if you’re in a hurry this works.

def weighted_mse_loss(input, target):
    # alpha of 0.5: half the weight goes to the first output,
    # the remaining half is split evenly across the other 15
    weights = Variable(torch.Tensor([0.5] + [0.5 / 15] * 15)).cuda()
    pct_var = (input - target) ** 2
    out = pct_var * weights.expand_as(target)
    loss = out.mean()
    return loss
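A tidier variant of the same idea, for what it's worth, is to keep the general three-argument version and bind a fixed weights tensor with functools.partial, so the result works as a standard two-argument criterion (a sketch, assuming the same 16-output weighting):

import functools
import torch

def weighted_mse_loss(input, target, weight):
    return (weight * (input - target) ** 2).mean()

weights = torch.Tensor([0.5] + [0.5 / 15] * 15)
criterion = functools.partial(weighted_mse_loss, weight=weights)
# criterion(output, target) now has the usual (input, target) signature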

Your question is somewhat off-topic.

Hi,
It’s not a big deal, but why isn’t weighting provided in the default implementation, since all (or at least most) other loss functions have it?


Actually,

def weighted_mse_loss(input, target, weight):
    return (weight * (input - target) ** 2).mean()

is slightly better, since the sum can get quite large. Taking the average is exactly what nn.MSELoss does by default.
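As a quick sanity check with made-up tensors: with all weights equal to one, this reduces exactly to nn.MSELoss with its default mean reduction:

import torch
import torch.nn as nn

input = torch.rand(4, 3)
target = torch.rand(4, 3)
weight = torch.ones(4, 3)

assert torch.allclose(weighted_mse_loss(input, target, weight),
                      nn.MSELoss()(input, target))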


I think it is better to divide by the sum of the weights instead of taking the average, because that is how the weighted cross-entropy loss is implemented:

def weighted_mse_loss(input, target, weight):
    return (weight * (input - target) ** 2).sum() / weight.sum()
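A nice side effect of this normalization (a quick check with made-up tensors): scaling all the weights by a constant leaves the loss unchanged, so only the relative weights matter:

import torch

input = torch.rand(4, 3)
target = torch.rand(4, 3)
weight = torch.rand(4, 3)

l1 = weighted_mse_loss(input, target, weight)
l2 = weighted_mse_loss(input, target, 10 * weight)
assert torch.allclose(l1, l2)   # invariant to the scale of the weights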