Per-class and per-sample weighting

For the class weighting I would indeed use the `weight` argument of the loss function, e.g. `nn.CrossEntropyLoss`.
I assume you could save a tensor with the sample weights during your preprocessing step.
If so, you could create the loss function with `reduction='none'`, which returns the loss for each sample. You could then return the sample weights together with the data and target for the current batch, multiply them with the per-sample loss, and finally compute the average before calling `backward()`.
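A minimal sketch of both approaches (the class counts, sample weights, and batch size are made-up placeholders for illustration):

```python
import torch
import torch.nn as nn

# Per-class weighting: pass one weight per class to the criterion.
# These values are hypothetical, e.g. inverse class frequencies.
class_weights = torch.tensor([1.0, 2.0, 0.5])
criterion_cls = nn.CrossEntropyLoss(weight=class_weights)

# Per-sample weighting: reduction='none' keeps one loss value per sample.
criterion_none = nn.CrossEntropyLoss(reduction='none')

logits = torch.randn(4, 3, requires_grad=True)  # batch of 4, 3 classes
target = torch.tensor([0, 1, 2, 1])
# In practice these would be returned by the Dataset along with data/target.
sample_weights = torch.tensor([1.0, 0.5, 2.0, 1.0])

loss_per_sample = criterion_none(logits, target)   # shape: (4,)
loss = (loss_per_sample * sample_weights).mean()   # weighted average
loss.backward()
```

Dividing by `sample_weights.sum()` instead of taking the plain mean is also common if you want a true weighted average.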
