I am using a minibatch size of 5, and my data consists of parallel sentences (for instance, for machine translation). I have tried different numbers of workers for the DataLoader (from 0 to 32), and it seems this does not affect the speed of each training epoch. I am using PyTorch 0.4.1 with Python 3.5.5.
Is this reasonable, or is there a problem?
That just means that the data-loading part of your program is not a significant cost compared to the other computational components (forward/backward passes, optimizer steps), so increasing the number of workers will not change the overall epoch time. Workers only help when the GPU or CPU compute is waiting on batches.
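To see why, here is a toy back-of-the-envelope model (not actual PyTorch code, and it idealizes away worker startup and IPC overhead): once the prefetch pipeline is saturated, the per-batch wall time is roughly `max(load_time / num_workers, compute_time)`. When compute dominates, adding workers changes nothing.

```python
def epoch_time(num_batches, load_time, compute_time, num_workers):
    """Idealized epoch wall time: workers prefetch batches in parallel,
    so per-batch time is bounded below by the compute time."""
    per_batch = max(load_time / max(num_workers, 1), compute_time)
    return num_batches * per_batch

# Compute-bound case (like the question): 1 ms to load, 50 ms to train.
compute_bound = [epoch_time(1000, 0.001, 0.05, w) for w in (1, 4, 16)]
print(compute_bound)  # all 50.0 — more workers don't help

# IO-bound case: 200 ms to load, 50 ms to train.
io_bound = [epoch_time(1000, 0.2, 0.05, w) for w in (1, 4, 16)]
print(io_bound)  # 200.0, 50.0, 50.0 — workers help until compute dominates
```

A practical check on your real setup: time one epoch with `num_workers=0` while replacing the model step with a no-op. If that loop is already much faster than a normal epoch, loading is not your bottleneck.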