For anyone reading this, Nvidia DALI is a great solution:
It has simple-to-use PyTorch integration.
I was running into the same problems with the PyTorch DataLoader. On ImageNet, I couldn’t seem to get above about 250 images/sec. On a Google Cloud instance with 12 cores & a V100, I could get just over 2,000 images/sec with DALI. However, in cases where the data loader isn’t the bottleneck, I found that using DALI cost 5-10% of performance. That makes sense, I think, since you’re using the GPU to do some of the decoding & preprocessing.
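For anyone who wants a starting point, here’s roughly what a DALI ImageNet pipeline wired into PyTorch looks like. This is a minimal sketch using DALI’s `pipeline_def` / `DALIClassificationIterator` API; the data path, batch size, and thread count are placeholders rather than a tuned config, so check the DALI docs for the current API:

```python
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn
import nvidia.dali.types as types
from nvidia.dali.plugin.pytorch import DALIClassificationIterator

@pipeline_def
def imagenet_pipeline(data_dir):
    # Read JPEGs + labels from a directory tree (one subdir per class)
    jpegs, labels = fn.readers.file(file_root=data_dir,
                                    random_shuffle=True, name="Reader")
    # "mixed" decodes JPEGs on the GPU -- this is where the 5-10% GPU cost comes from
    images = fn.decoders.image(jpegs, device="mixed")
    images = fn.resize(images, resize_shorter=256)
    # Crop, cast to float, and normalize in one fused op
    images = fn.crop_mirror_normalize(
        images,
        crop=(224, 224),
        dtype=types.FLOAT,
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )
    return images, labels

# Placeholder path and sizes -- tune num_threads to your core count
pipe = imagenet_pipeline(batch_size=64, num_threads=12, device_id=0,
                         data_dir="/path/to/imagenet/train")
pipe.build()
loader = DALIClassificationIterator(pipe, reader_name="Reader")

for batch in loader:
    images = batch[0]["data"]   # torch.Tensor, already on the GPU
    labels = batch[0]["label"]
    # ... forward/backward pass as usual
```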
Edit: DALI also has a CPU-only mode, meaning no GPU performance hit.
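If that GPU cost matters for your workload, the CPU-only switch is (as far as I understand it) just decoding on the host and building the pipeline without a device. A sketch, reusing the hypothetical pipeline from above:

```python
# CPU-only variant (sketch): decode on the host and build the pipeline with
# device_id=None so DALI never touches the GPU.
@pipeline_def
def imagenet_pipeline_cpu(data_dir):
    jpegs, labels = fn.readers.file(file_root=data_dir,
                                    random_shuffle=True, name="Reader")
    images = fn.decoders.image(jpegs, device="cpu")  # host-side JPEG decode
    images = fn.resize(images, resize_shorter=256)
    images = fn.crop_mirror_normalize(
        images, crop=(224, 224), dtype=types.FLOAT,
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )
    return images, labels

pipe = imagenet_pipeline_cpu(batch_size=64, num_threads=12, device_id=None,
                             data_dir="/path/to/imagenet/train")
```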