Custom multi-target cross entropy loss

Hi, I am writing a custom multi-target cross-entropy loss that, for each sample, sums the log_softmax scores of the wanted target classes. Something like:

    import torch
    import torch.nn.functional as F

    def _mce_loss(scores, targets):
        loss = []
        for k in range(len(scores)):
            # sum the log-probabilities of the wanted classes for sample k
            loss.append(-F.log_softmax(scores[k], dim=-1)[targets[k]].sum())
        return torch.stack(loss)  # stack (not torch.tensor) keeps the autograd graph intact
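
For context, here is a minimal way I am calling it (the shapes, values, and two-targets-per-sample setup are all made up for illustration):

    # 2 samples, 4 classes, two wanted classes per sample (made-up example)
    scores = torch.randn(2, 4, requires_grad=True)
    targets = [torch.tensor([0, 2]), torch.tensor([1, 3])]

    loss = _mce_loss(scores, targets).mean()
    loss.backward()
    print(scores.grad)  # gradients do flow back to the logits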

I am not sure how backward() is going to penalize/reward each class, given that I am just summing the log_softmax scores of the wanted classes each time.
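
Working it out by hand (so I may well be wrong): with p = softmax(scores[k]) and T_k the set of wanted classes for sample k, I would expect d(loss_k)/d(scores[k][j]) = len(T_k) * p[j] - 1 if j is in T_k, and len(T_k) * p[j] otherwise, i.e. unwanted classes always get pushed down and each wanted class gets pushed toward probability 1/len(T_k). A small check of that expectation, reusing _mce_loss and the imports above:

    # fresh made-up inputs; .sum() so no extra 1/batch_size factor in the grads
    scores = torch.randn(2, 4, requires_grad=True)
    targets = [torch.tensor([0, 2]), torch.tensor([1, 3])]
    _mce_loss(scores, targets).sum().backward()

    probs = F.softmax(scores.detach(), dim=-1)
    expected = 2 * probs            # len(T_k) == 2 for both samples here
    expected[0, [0, 2]] -= 1        # minus 1 at the wanted classes
    expected[1, [1, 3]] -= 1
    print(torch.allclose(scores.grad, expected, atol=1e-6))  # I expect True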
Any suggestions?