Loss-Based Sampling for Minibatch Creation

I want to create minibatches based on the loss values of the samples. Loss-based sampling simply means drawing data points with probability proportional to their current loss values.
I want to recreate the results of this paper.
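To illustrate the idea, here is a minimal, self-contained sketch (the loss values below are made up for illustration):

```python
import torch

# Hypothetical per-sample losses for a dataset of 8 examples.
losses = torch.tensor([0.1, 2.0, 0.5, 3.0, 0.2, 1.5, 0.05, 0.8])

# Draw a minibatch of 4 indices with probability proportional to
# each sample's loss, without replacement within the batch.
batch_idx = torch.multinomial(losses, num_samples=4, replacement=False)
```

High-loss samples (indices 1 and 3 here) are much more likely to be selected than low-loss ones.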
Here is the code I implemented, but I am not getting the desired results: I get only around 75% test accuracy on MNIST.

class LossSampler(Sampler):
    r"""Samples batches of indices with probability proportional to each
    sample's current loss (without replacement within a batch).

    Arguments:
        data_source (Dataset): dataset to sample from
    """

    def __init__(self, model, data_source, train_data, train_target, batch_size):
        self.model = model
        self.data_source = data_source
        self.batch_size = batch_size
        # Add a channel dimension and move the data to the GPU once, up front.
        self.data = torch.unsqueeze(train_data, 1).float().cuda()
        self.target = train_target

    def get_scores(self):
        # No gradients are needed for scoring, so skip building the graph.
        with torch.no_grad():
            output, feat = self.model(self.data)
            # `reduce=False` is deprecated; `reduction='none'` keeps the
            # per-sample losses instead of averaging them.
            criterion = nn.CrossEntropyLoss(reduction='none')
            loss = criterion(output, self.target)
        return loss, feat

    def __iter__(self):
        num_batches = len(self.data_source) // self.batch_size
        for _ in range(num_batches):
            # Rescore after every batch so the weights track the current model.
            scores, feat = self.get_scores()
            # Sample a batch of indices weighted by per-sample loss.
            sampled = torch.multinomial(scores, num_samples=self.batch_size)
            yield sampled

    def __len__(self):
        # A batch-yielding sampler's length is the number of batches it yields.
        return len(self.data_source) // self.batch_size
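For context, this is roughly how I wire a batch-yielding sampler into a `DataLoader` via its `batch_sampler` argument (the uniform sampler and toy tensors below are just stand-ins for illustration, not my actual model or data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, Sampler

# Toy stand-ins for the real dataset (assumption: MNIST-shaped tensors).
data = torch.randn(100, 28, 28)
targets = torch.randint(0, 10, (100,))
dataset = TensorDataset(data, targets)

# A trivial sampler with the same interface: yields batches of indices.
class UniformBatchSampler(Sampler):
    def __init__(self, data_source, batch_size):
        self.data_source = data_source
        self.batch_size = batch_size

    def __iter__(self):
        num_batches = len(self.data_source) // self.batch_size
        for _ in range(num_batches):
            yield torch.randint(0, len(self.data_source),
                                (self.batch_size,)).tolist()

    def __len__(self):
        return len(self.data_source) // self.batch_size

# `batch_sampler` consumes whole index batches, so `batch_size` and
# `shuffle` must NOT also be passed to the DataLoader.
loader = DataLoader(dataset, batch_sampler=UniformBatchSampler(dataset, 10))
x, y = next(iter(loader))
```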

Is there any glaring error in the code that you can see?