Compute weights for L1 loss

I have a dataframe and I want to apply regression, but with a weighted L1 loss, so I calculated a weight for each label:

import numpy as np

bins = np.digitize(train_df.label.values, [50, 70, 90], right=False)
weights = np.select([bins == 1, bins == 2, bins == 3], [0.75, 0.5, 0.75])
weights = dict(zip(train_df.label.values, weights))

Now I have a dictionary that maps each label to its corresponding weight, and I want to pass these weights to my loss function.

If I print the loss values after setting reduction='none', I get the following output:

tensor([72.0564, 85.8526, 86.7617], device='cuda:0',
       grad_fn=<SmoothL1LossBackward>)

but I am confused about how to compute the weighted loss from these values.

I am trying something like this:

# per-sample losses (criterion was created with reduction='none')
loss = self.criterion(out, batch["ground_label"].float())
# look up the weight of each label on the CPU, then move the tensor to the GPU
w_ar = torch.tensor([weights[i] for i in batch["ground_label"].numpy()]).cuda()
loss = torch.sum(loss * w_ar)

I don't know whether this approach is correct, and it is slow because of the CPU-to-GPU conversion. How can I make it faster?


If the tensor creation is slow in your code, you could calculate the weights once, store them on the GPU and index them using the current label tensor.
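A minimal sketch of that idea, assuming the bin edges and weights from the first post and that the labels are already a CUDA float tensor (the label_weights helper is illustrative, not from the original posts; the leading 0.0 weight mirrors the np.select default for labels below 50):

import torch

# bin edges and per-bin weights, created once and kept on the GPU
boundaries = torch.tensor([50.0, 70.0, 90.0], device="cuda")
# np.select above falls back to 0 for labels below 50, hence the leading 0.0
bin_weights = torch.tensor([0.0, 0.75, 0.5, 0.75], device="cuda")

def label_weights(labels):
    # right=True in torch.bucketize matches np.digitize(..., right=False)
    bins = torch.bucketize(labels, boundaries, right=True)
    return bin_weights[bins]

# inside the training loop (criterion created with reduction='none'):
# target = batch["ground_label"].float().cuda()
# loss = criterion(out, target)
# loss = torch.sum(loss * label_weights(target))

This avoids building a Python list and a new tensor on the CPU in every iteration; the lookup happens entirely on the GPU.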

Your general approach looks alright, but note that a weighted loss is usually normalized to avoid a dependency on the class label distribution in the current batch, as described e.g. here.
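For example, the normalization could simply divide by the sum of the weights in the batch (reusing the illustrative label_weights helper from the sketch above):

# weighted per-sample losses, normalized by the batch's weight sum,
# so the loss scale does not depend on which labels were sampled
loss = criterion(out, target)  # reduction='none'
w = label_weights(target)
loss = (loss * w).sum() / w.sum()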
