Problem with batch size

Hey guys,

I am facing quite a strange error, which I have not encountered before.

I have a data loader with batches of size 256. My training begins and everything is fine, meaning that batch_x and batch_y have the correct sizes of [256, 1, 28, 28] and [256] respectively.

Then something happens and these sizes become [96, 1, 28, 28] and [96].

Any ideas how this is even possible?

Hard to say without seeing your dataset class.

But are you sure it's not just the DataLoader working as expected? If this happens at the end of an epoch (i.e. on the last batch), then I would assume so. For what it's worth, 28x28 inputs look like MNIST; with 60,000 training samples, 60000 % 256 = 96, which would match exactly the size you're seeing.

Basically, when the length of your whole dataset is not an exact multiple of batch_size, the last batch of the epoch either has to be smaller than the rest or be dropped entirely, which is what the drop_last argument of the DataLoader controls.

https://pytorch.org/docs/stable/data.html

So if, for example, the training set was size 10 and the batch size was 3, you would have (see the quick sketch below the list):

first batch  = 3
second batch = 3
third batch  = 3
fourth batch = 1
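
Here is a minimal sketch of that scenario, using a toy TensorDataset of 10 samples (the dataset, shapes, and batch size are just placeholders for illustration), comparing the batch sizes you get with drop_last=False versus drop_last=True:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Toy dataset: 10 samples of shape [1, 28, 28] with integer labels
x = torch.randn(10, 1, 28, 28)
y = torch.randint(0, 10, (10,))
dataset = TensorDataset(x, y)

# Default behaviour: the final batch is smaller, since 10 % 3 == 1
loader = DataLoader(dataset, batch_size=3, drop_last=False)
print([batch_x.shape[0] for batch_x, batch_y in loader])  # [3, 3, 3, 1]

# drop_last=True: the incomplete final batch is discarded
loader = DataLoader(dataset, batch_size=3, drop_last=True)
print([batch_x.shape[0] for batch_x, batch_y in loader])  # [3, 3, 3]
```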