Batch loader can give batch size of 1 on new epochs


(Marc) #1

I am using a data loader to load batches of data into a DataParallel network. The issue is that my network uses batch normalization, and batch norm errors out when the batch size is 1. With 3 GPUs and a batch size of 15, I get an incomplete batch at the end of each epoch, which often leaves at least one GPU with a batch size of 1. I have tried several different batch sizes, but the dataset size keeps giving me this problem. I don't know how to resolve this with the automatic batching in the data loader. Is there an option to pad that last batch with all zeros or something? A made-up example of the split I'm seeing is sketched below.
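
For illustration only (the dataset length here is a made-up number), the leftover batch at the end of an epoch seems to get split across the GPUs roughly the way torch.chunk splits a tensor along dim 0, so a remainder of 7 samples ends up as 3/3/1:

```python
# Illustration only: a hypothetical dataset length showing how the leftover
# batch would be split across GPUs (DataParallel chunks the batch along dim 0).
import torch

dataset_len = 2512   # hypothetical dataset size
batch_size = 15
num_gpus = 3

leftover = dataset_len % batch_size                       # 7 samples in the last batch
splits = [c.size(0) for c in torch.empty(leftover, 1).chunk(num_gpus)]
print(leftover, splits)                                   # 7 [3, 3, 1] -> one GPU gets a batch of 1
```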


(Simon Wang) #2

You can manually pad the last batch, or just drop it by passing drop_last=True to the DataLoader.
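
Something like this, assuming your Dataset object is called `dataset`:

```python
# Minimal sketch: drop_last=True makes the DataLoader discard the final
# incomplete batch, so every batch sent to DataParallel has exactly
# batch_size samples.
from torch.utils.data import DataLoader

loader = DataLoader(
    dataset,
    batch_size=15,
    shuffle=True,
    drop_last=True,   # skip the leftover samples at the end of each epoch
)
```

The trade-off is that the leftover samples are simply skipped that epoch; with shuffle=True a different subset gets skipped each time, so nothing is permanently excluded from training.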