I still don't understand PyTorch's WeightedRandomSampler

I have a dataset with 22 classes in total: 21 classes have 90 images each and 1 class has 11 images, for a total of 1901 images.

Either I don't understand the documentation for PyTorch's WeightedRandomSampler, the documentation is wrong, or both. I can certainly say that it is not nearly as clear as the documentation for other classes.

I used replacement=True throughout. I created an ImageFolder instance and used it to build a DataLoader (together with a WeightedRandomSampler instance). I also set shuffle=False on the DataLoader, since a DataLoader cannot take both a sampler and shuffle=True.
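Here is a minimal sketch of the shared setup; the "data_root" path and the transform are placeholders (my real ones shouldn't matter for the question). The two attempts described below differ only in the weights and num_samples I pass to WeightedRandomSampler.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

# "data_root" and the transform are placeholders for my actual data pipeline
dataset = datasets.ImageFolder("data_root", transform=transforms.ToTensor())

targets = torch.tensor(dataset.targets)  # 1901 class indices, one per image
num_classes = len(dataset.classes)       # 22
batch_size = 8
```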

From the documentation, I originally thought that the weights argument was just one weight per class (22 numbers). I also set num_samples to the batch_size (8). When I iterated through the DataLoader and tallied the total number of images selected per class, I got garbage:
```
[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
```

where each number is the count of images selected for that class. In other words, only one image was selected out of 1901.
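Concretely, that first attempt looked roughly like this (continuing from the setup sketch above; the inverse-frequency weighting is just my choice of class weight, and only the shape of the weights and the num_samples value matter here):

```python
# First attempt: one weight per CLASS, i.e. 22 numbers
class_counts = torch.bincount(targets).float()  # 21 entries of 90, one of 11
class_weights = 1.0 / class_counts              # shape [22]

sampler = WeightedRandomSampler(class_weights, num_samples=batch_size,
                                replacement=True)
loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler,
                    shuffle=False)
```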

I then gave every sample the weight of its class, so the weights argument became one weight per image (1901 numbers). The batch size is still 8. I got something that looked more reasonable:
```
[85 76 63 75 87 88 80 76 77 76 86 69 79 71 66 57 85 69 75 70 69 60]
```
Although it looks reasonable, I think something is still wrong, because these numbers sum to only 1639, which means that about 14% of the images are never selected. I realize there may be a little slop in the per-class totals because the sampler draws at random, but a 14% shortfall in the overall count seems large.
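For reference, the second attempt and my counting loop look roughly like this (again continuing from the setup above; I'm assuming num_samples should be the dataset length, since I want one epoch's worth of draws):

```python
# Second attempt: one weight per SAMPLE, i.e. 1901 numbers; each image
# inherits the inverse frequency of its class
class_counts = torch.bincount(targets).float()
sample_weights = 1.0 / class_counts[targets]    # shape [1901]

sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset),
                                replacement=True)
loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler,
                    shuffle=False)

# tally how many images of each class come out of one full pass
selected = torch.zeros(num_classes, dtype=torch.long)
for _, batch_targets in loader:
    selected += torch.bincount(batch_targets, minlength=num_classes)
print(selected, selected.sum().item())
```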

If I run the same test with batch_size=1, I get:

```
[ 95 80 92 77 81 85 83 78 105 89 84 85 91 86 79 86 101 97 100 75 67 85]
```

These numbers sum to 1901, which is what I expected.

I would like to get 1901 images (or a number close to it) selected with a batch_size other than 1. Is this possible? Am I doing something wrong?