Coordinating CPU and GPU for transforms and training

Hi,
How would I accomplish the following?

Say I want to use a portion of the CPU (or the entire CPU; I’m using an AWS P2 instance, which has a multi-core CPU) to perform image transformations on the next batch while the GPU does the backprop/updates on the current batch. I’m not that familiar with parallel processing in Python, so I’m not sure where to start. The goal is to decrease the time spent on augmentation by having the CPU generate the augmentations during the phases of the training cycle when it would otherwise sit idle.

Thank you!

Austin

DataLoader has a num_workers argument to do that: http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader
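Here's a minimal sketch of what that looks like (the dataset path, batch size, and transform pipeline are just placeholders, not something from this thread): with `num_workers > 0`, the DataLoader spawns that many CPU worker processes which load and augment upcoming batches in the background while the GPU runs the training step on the current one.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Augmentations run inside the worker processes, i.e. on the CPU.
transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(224, padding=4),
    transforms.ToTensor(),
])

# Placeholder dataset; any Dataset works here.
dataset = datasets.ImageFolder("path/to/images", transform=transform)

loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=4,     # 4 CPU processes prepare/augment batches in parallel
    pin_memory=True,   # page-locked host memory speeds up CPU-to-GPU copies
)

device = torch.device("cuda")
for images, labels in loader:
    # While this batch is training on the GPU, the workers are already
    # loading and augmenting the following batches on the CPU.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward / backward / optimizer step ...
```

There's no explicit GPU/CPU "split" to configure: the workers always run on the CPU, and the training loop uses the GPU; tuning `num_workers` (often one per CPU core, minus a little headroom) is what controls how much augmentation work overlaps with training.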


Would you mind giving short instructions on how to specify the GPU/CPU split with the DataLoader num_workers argument?