In a multi-GPU DataLoader, if I set drop_last=False and the last batch can't be split evenly across the GPUs, what does PyTorch do?
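It depends on which multi-GPU mechanism is in play; here is the short version for the common DistributedDataParallel setup. The DataLoader's own drop_last only controls the last batch *within each process*. Splitting samples across GPUs is the job of DistributedSampler, and with the sampler's default drop_last=False PyTorch drops nothing: it pads the index list by repeating samples from the start of the epoch so that every replica receives exactly ceil(len(dataset) / num_replicas) samples. (With the older single-process DataParallel, the undersized final batch is simply scattered across the GPUs, so some GPUs get a smaller chunk.) Below is a minimal, self-contained sketch of the padding behavior; the 10-sample dataset split over 4 replicas is a made-up example, not anything from your setup:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Toy example: 10 samples, 4 replicas -- 10 is not divisible by 4.
dataset = TensorDataset(torch.arange(10))

for rank in range(4):
    # With drop_last=False (the default), DistributedSampler pads the
    # index list with repeated samples so every replica sees
    # ceil(10 / 4) = 3 samples.
    sampler = DistributedSampler(
        dataset, num_replicas=4, rank=rank, shuffle=False
    )
    loader = DataLoader(dataset, batch_size=2, sampler=sampler)
    print(rank, [batch[0].tolist() for batch in loader])

# Prints (samples 0 and 1 reappear on ranks 2 and 3 as padding):
# 0 [[0, 4], [8]]
# 1 [[1, 5], [9]]
# 2 [[2, 6], [0]]
# 3 [[3, 7], [1]]
```

If the duplicated samples would skew your metrics (e.g. during evaluation), you can pass drop_last=True to DistributedSampler to drop the tail instead, and recent PyTorch versions also provide the torch.distributed.algorithms.join.Join context manager for training on genuinely uneven per-rank inputs.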
I understand, thank you!