Hard example mining on PyTorch

I am trying to implement hard example mining in PyTorch, and here is the problem: this method needs to backpropagate only part of the loss, not all of it. For example, if the batch_size is 10, I only want to backward the losses of 5 examples. How can I do that? I have tried code like
loss[1].backward()
It runs, but I am not sure it is mathematically correct.
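
To be concrete, here is roughly what I am trying to achieve (a sketch only; model, inputs and targets stand in for my own network and data):

criterion = nn.CrossEntropyLoss(reduction='none')   # per-example losses instead of a single scalar
logits = model(inputs)                               # shape: (batch_size, num_classes)
per_example_loss = criterion(logits, targets)        # shape: (batch_size,)
hard_losses, _ = per_example_loss.topk(5)            # keep the 5 examples with the highest loss
hard_losses.mean().backward()                        # reduce to a scalar, then backprop

Is something like this the right way to do it?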

Two questions:

  1. If you only want 5 examples to affect your loss (and consequently to be used for backprop), why don’t you set your batch_size to 5 instead of 10?

  2. What do you really aim to compute with loss[1].backward()? (Note that a loss function should be a scalar function, i.e. it should return a scalar and not a vector…)
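
As a quick illustration (the logits and targets below are random, just to show the shapes):

import torch
import torch.nn as nn

logits = torch.randn(10, 4, requires_grad=True)   # 10 examples, 4 classes
targets = torch.randint(0, 4, (10,))

loss = nn.CrossEntropyLoss()(logits, targets)
print(loss.shape)          # torch.Size([]) -- a scalar, so loss.backward() is fine

per_example = nn.CrossEntropyLoss(reduction='none')(logits, targets)
print(per_example.shape)   # torch.Size([10]) -- per_example.backward() would raise an error;
                           # per_example[1] is a scalar again, so per_example[1].backward()
                           # runs, but it only backprops the loss of example 1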

Thank you for your reply

  1. In a classifier, each class has its own loss, for example with CrossEntropyLoss, and the final loss is usually the combined value of them. I want to choose the 5 classes with the highest loss values and backprop only those.
  2. Sorry, the code is not valid; what I wanted to express is the loss of the first class in a classifier.

@LiuzcEECS, sorry but what you say in 1. is not entirely true. In a classifier, for each training example, each class has its own probability, but not its own loss. Considering, for instance, the negative log-likelihood as the loss function, the loss for that example is -log(p), where p denotes the predicted probability of the true class.
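
A quick sanity check with one made-up example (three classes, true class at index 2): computing -log(p) by hand matches what CrossEntropyLoss returns for that example:

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 0.5]])
target = torch.tensor([2])

p = F.softmax(logits, dim=1)[0, 2]               # predicted probability of the true class
manual_loss = -torch.log(p)                      # -log(p): one scalar per example, not per class
builtin_loss = nn.CrossEntropyLoss()(logits, target)
print(manual_loss.item(), builtin_loss.item())   # same value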

Using the cross-entropy loss with a softmax classifier actually minimizes the cross-entropy between the estimated class probabilities and the true distribution. So its objective is to properly distribute probability across all classes according to the target distribution. You could interpret that as minimizing the loss over all classes, actually.

Sure, but in the standard situation where your labels are one-hot, the “loss” for every class will be zero except (possibly) for the true class.
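
For instance, with a made-up single example and a one-hot target for class 2, the per-class terms -y_c * log(p_c) look like this:

import torch
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 0.5]])
one_hot = torch.tensor([[0.0, 0.0, 1.0]])

per_class_terms = -one_hot * F.log_softmax(logits, dim=1)   # -y_c * log(p_c) for each class c
print(per_class_terms)        # zero everywhere except at the true class
print(per_class_terms.sum())  # equals -log(p) for this example, i.e. the usual cross-entropy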