How large should num_samples be in WeightedRandomSampler?

Does it make sense to choose a num_samples larger than the actual dataset size, in order to be sure that the weighted random sampling actually draws “enough” samples from each class?

Based on the example here, num_samples is set to the dataset size, but I am trying:

        import numpy as np
        import torch

        # class_counts[i] = number of training samples in class i
        weights = [1.0 / x for x in class_counts]  # inverse class frequency
        # per-sample weight: the inverse frequency of that sample's class
        label_wts = torch.from_numpy(np.array([weights[x] for x in labels]))
        # twice the size of a balanced dataset built around the minority class
        num_samples = int(2 * min(class_counts) * len(class_counts))
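For reference, these then feed into the sampler roughly like this (dataset stands in for my actual Dataset, and batch_size is arbitrary):

        from torch.utils.data import DataLoader, WeightedRandomSampler

        sampler = WeightedRandomSampler(weights=label_wts,
                                        num_samples=num_samples,
                                        replacement=True)
        loader = DataLoader(dataset, batch_size=32, sampler=sampler)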

Is that enlarged num_samples necessary?

Thanks, Micha

I don’t think it’s necessary, since increasing the number of epochs should have the same effect, shouldn’t it?
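As a rough sanity check (reusing label_wts, labels, and class_counts from your snippet), you can draw indices with replacement and count the sampled labels; the class balance comes out the same whether you enlarge num_samples or simply run more epochs:

        import collections
        from torch.utils.data import WeightedRandomSampler

        sampler = WeightedRandomSampler(label_wts, num_samples=10000,
                                        replacement=True)
        # each class should appear roughly 10000 / len(class_counts) times
        print(collections.Counter(labels[i] for i in sampler))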

Thanks, that certainly clarifies it.