Guidelines for assigning num_workers to DataLoader

Sure, it’s possible, but you should consider a few shortcomings.
If you are dealing with a (preprocessed) array / tensor, you could simply load it, push it to the device once, and index it to create batches. A DataLoader could still be used, but multiple workers most likely won’t speed up your data pipeline much, since the data already resides on the GPU.
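A minimal sketch of this approach, assuming the whole (illustrative, randomly generated) dataset fits in device memory:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical preprocessed dataset: 1000 samples of shape 3x32x32.
data = torch.randn(1000, 3, 32, 32)
targets = torch.randint(0, 10, (1000,))

# Push everything to the device once, up front.
data = data.to(device)
targets = targets.to(device)

batch_size = 64
for i in range(0, data.size(0), batch_size):
    # Slicing a device tensor yields batches without a host-to-device copy.
    x = data[i:i + batch_size]
    y = targets[i:i + batch_size]
    # ... forward/backward pass with x, y ...
```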

If you want to apply data augmentation, you would need to apply it on the GPU. Since a lot of torchvision transformations are written using PIL (and thus expect CPU-side images), you would have to use another library or implement the augmentations manually with tensor operations.
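As a sketch of a manual, tensor-only augmentation (the function name and probability parameter are illustrative), a random horizontal flip can be written with plain torch ops, so it runs on whichever device the batch lives on:

```python
import torch

def random_hflip(batch, p=0.5):
    # Flip a random subset of the batch along the width dimension.
    # Uses only tensor ops, so it works on CPU or GPU alike.
    mask = torch.rand(batch.size(0), device=batch.device) < p
    batch = batch.clone()
    batch[mask] = torch.flip(batch[mask], dims=[-1])
    return batch

x = torch.randn(8, 3, 32, 32)
aug = random_hflip(x)
```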

Also note that your data will occupy memory on your device, which is then no longer available to the model.
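To gauge that cost up front, you can estimate a tensor's memory footprint before committing it to the GPU (the dataset shape here is hypothetical):

```python
import torch

# Hypothetical dataset: 1000 float32 images of shape 3x224x224.
data = torch.randn(1000, 3, 224, 224)

# Bytes the tensor would occupy on the device.
bytes_needed = data.nelement() * data.element_size()
print(f"{bytes_needed / 1024**2:.1f} MiB")  # ~574 MiB
```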
