NLLLoss with importance sampling

Hi. I just started PyTorch.
I would like to compute NLLLoss with importance sampling.
That is,

- \sum_{i=1}^{N} w_i \log p(t_i \mid x_i)

where w_i is the weight for the i-th sample.
Note that this weight is different from the per-class weight typically used for unbalanced datasets.

It seems that PyTorch's NLLLoss is written in C (is that why PyTorch is fast?).
I would really appreciate it if anyone could tell me a simple way to implement NLLLoss with importance sampling.

Thanks.


You can always implement a modified NLL loss in Python using just torch.* ops; autograd will take care of the backward pass for you.
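
For example, here is a minimal sketch of a per-sample-weighted NLL loss (the function name `weighted_nll_loss`, the tensor shapes, and the sum reduction are just assumptions for illustration, not the built-in API):

```python
import torch

def weighted_nll_loss(log_probs, targets, sample_weights):
    # log_probs: (N, C) log-probabilities, e.g. the output of log_softmax
    # targets: (N,) class indices t_i
    # sample_weights: (N,) per-sample importance weights w_i
    # Pick log p(t_i | x_i) for each sample, weight it, and sum (negated).
    picked = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return -(sample_weights * picked).sum()

# toy usage: autograd differentiates through the weighted sum
log_probs = torch.log_softmax(torch.randn(4, 3, requires_grad=True), dim=1)
targets = torch.tensor([0, 2, 1, 0])
weights = torch.tensor([0.5, 1.0, 2.0, 0.25])
loss = weighted_nll_loss(log_probs, targets, weights)
loss.backward()
```

If you want something closer to NLLLoss with reduction='mean', you could divide by `sample_weights.sum()` instead of taking a plain sum.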