What I understand from your snippet above is that I should create an nn.Module and put that code in its forward function. Am I right? I ask because I would like to use autograd for backprop!
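For reference, a minimal sketch of that pattern (the class name and weighting scheme here are my own placeholders, not from the original snippet); since forward uses only tensor ops, autograd builds the backward graph automatically:

import torch
import torch.nn as nn

class WeightedMSELoss(nn.Module):  # hypothetical name, just for illustration
    def __init__(self, weights):
        super().__init__()
        # register as a buffer so .to()/.cuda() moves the weights with the module
        self.register_buffer("weights", weights)

    def forward(self, input, target):
        # plain tensor ops, so autograd handles the backward pass for free
        return (self.weights * (input - target) ** 2).mean()

You would construct it once, e.g. criterion = WeightedMSELoss(my_weights), and then call it like any built-in loss.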
Were you able to solve this? When I try to set this function as my criterion, I get an error saying the function requires its three inputs, which obviously aren't computed until training. Maybe this is why it needs to go in an nn.Module?
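One thing worth checking (just a guess at the cause): the criterion should be the function itself, not a call to it; the tensors are only supplied later, inside the training loop. A sketch, assuming a (input, target) signature:

criterion = weighted_mse_loss          # pass the function itself, no parentheses
# ... later, inside the training loop, once the tensors exist:
loss = criterion(output, target)
loss.backward()

If your version takes a third argument (e.g. a weight vector), functools.partial can bind it up front.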
This is the ugly hack I created that works for this problem: 16 outputs, with the first output weighted 0.5 (i.e., 8/16) and each of the remaining 15 outputs weighted 0.5/15.
I'm sure there's a better way to do this, but if you're in a hurry, this works.
import torch

def weighted_mse_loss(input, target):
    # alpha of 0.5 means half the weight goes to the first output;
    # the remaining half is split evenly across the other 15 outputs
    weights = torch.cat([torch.tensor([0.5]), torch.full((15,), 0.5 / 15)]).to(input.device)
    pct_var = (input - target) ** 2
    out = pct_var * weights.expand_as(target)
    loss = out.mean()
    return loss
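A quick usage check (shapes assumed here: a batch of 4 samples with 16 outputs each):

pred = torch.randn(4, 16, requires_grad=True)
target = torch.randn(4, 16)

loss = weighted_mse_loss(pred, target)
loss.backward()  # gradients flow through the per-output weights as usual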