How to speed up the data loader

This worked for me, thanks. For HDF5: “file opening has to happen inside of the __getitem__ function of the Dataset wrapper.”
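To illustrate that pattern, here is a minimal sketch (the file path and dataset keys `images`/`labels` are hypothetical): the HDF5 file is opened lazily on first `__getitem__` call, so each DataLoader worker process gets its own handle instead of inheriting one across `fork`.

```python
# Sketch of the pattern above: open the HDF5 file lazily inside
# __getitem__, not in __init__, so every DataLoader worker process
# ends up with its own h5py file handle.
import h5py
import torch
from torch.utils.data import Dataset

class H5Dataset(Dataset):
    def __init__(self, h5_path):
        self.h5_path = h5_path
        self.file = None  # do NOT open here; handles don't survive fork
        with h5py.File(h5_path, "r") as f:   # open briefly just for the length
            self.length = len(f["images"])

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        if self.file is None:                # first access in this worker
            self.file = h5py.File(self.h5_path, "r")
        image = torch.from_numpy(self.file["images"][idx])
        label = int(self.file["labels"][idx])
        return image, label
```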


I wrote prefetching code and confirmed that it improves the performance of the data loader.
My code is based on the implementation here:
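The linked implementation isn't reproduced here, but the idea can be sketched as follows (class and parameter names are my own): a background thread keeps pulling batches from the underlying loader into a bounded queue, so the training loop rarely has to wait on data loading.

```python
# Minimal prefetching wrapper: a daemon thread fills a bounded queue
# with batches from the wrapped loader while the consumer trains.
import queue
import threading

class Prefetcher:
    def __init__(self, loader, num_prefetch=2):
        self.loader = loader
        self.queue = queue.Queue(maxsize=num_prefetch)
        self.thread = threading.Thread(target=self._worker, daemon=True)
        self.thread.start()

    def _worker(self):
        for batch in self.loader:
            self.queue.put(batch)   # blocks when the queue is full
        self.queue.put(None)        # sentinel: iteration finished

    def __iter__(self):
        while True:
            batch = self.queue.get()
            if batch is None:
                break
            yield batch
```

Usage is simply `for batch in Prefetcher(my_loader): ...`; the bounded queue caps memory use while keeping a couple of batches ready ahead of the training step.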

However, if you run the program on your local machine, I highly recommend buying an NVMe drive; this investment completely solved the problem of slow image loading for me.


So, the solution is to employ DALI and change:
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
to:
normalize = transforms.Normalize(mean=[0.485*255, 0.456*255, 0.406*255], std=[0.229*255, 0.224*255, 0.225*255])
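The reason for the scaling, as I understand it, is that DALI normalizes raw uint8 pixels in the [0, 255] range, whereas torchvision's `Normalize` is applied after `ToTensor` has rescaled pixels to [0, 1]; the ImageNet mean/std therefore need to be multiplied by 255. A quick check of the resulting constants:

```python
# Sanity check of the scaled ImageNet constants used with DALI,
# which operates on pixel values in [0, 255] rather than [0, 1].
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
scaled_mean = [round(m * 255, 3) for m in mean]
scaled_std = [round(s * 255, 3) for s in std]
print(scaled_mean)  # [123.675, 116.28, 103.53]
print(scaled_std)   # [58.395, 57.12, 57.375]
```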


Is DALI helpful in such cases?

Sorry, but we are not able to track all the forum threads about DALI; we do our best to be responsive on GitHub, so if you have any questions or requests, feel free to drop them there directly.

A noticeable speedup with h5py would be seen only when the h5 file is written without the chunked option.
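For reference, a small sketch of the difference (the file name and dataset names are illustrative): h5py stores a dataset contiguously unless you request chunking (or enable compression, which forces chunks), and a contiguous layout lets a slice map to a single sequential read.

```python
# h5py writes a dataset contiguously by default; passing chunks=...
# switches to chunked storage. The .chunks property reveals the layout.
import h5py
import numpy as np

data = np.random.rand(100, 3, 32, 32).astype(np.float32)

with h5py.File("images.h5", "w") as f:
    f.create_dataset("images", data=data)              # contiguous layout
    f.create_dataset("images_chunked", data=data,
                     chunks=(10, 3, 32, 32))           # chunked layout

with h5py.File("images.h5", "r") as f:
    print(f["images"].chunks)          # None -> contiguous
    print(f["images_chunked"].chunks)  # (10, 3, 32, 32)
```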


Hi @Hou_Qiqi, I saw you had a similar problem: you want the dataloader to prefetch data while training is ongoing, basically letting the GPU training and the CPU dataloader run in parallel.

Here is our code:

for fi, batch in enumerate(my_data_loader):

and in our dataloader we have defined a collate_fn to cook the data.


We observed that the GPU seems to block, waiting for the dataloader to process. Is there a way to prefetch as you mentioned? If we use a map-style dataset instead of an iterable one, does it work?
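A map-style dataset should work: with `num_workers > 0`, PyTorch's `DataLoader` already prefetches in the background, and `prefetch_factor` controls how many batches each worker keeps ready. A sketch under those assumptions (the dataset and the `cook_data` collate function below are illustrative stand-ins, not the original code):

```python
# Map-style dataset + DataLoader with worker processes: the workers
# load and collate batches in the background (prefetch_factor batches
# each), so the training loop does not block on CPU-side processing.
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):              # map-style: __len__ and __getitem__
    def __init__(self, n=64):
        self.data = torch.randn(n, 8)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

def cook_data(samples):                # collate_fn: build a batch from samples
    return torch.stack(samples)

my_data_loader = DataLoader(
    MyDataset(),
    batch_size=16,
    collate_fn=cook_data,
    num_workers=2,        # workers prepare batches in background processes
    prefetch_factor=2,    # batches each worker keeps ready ahead of time
    pin_memory=True,      # speeds up host-to-GPU copies
)

for fi, batch in enumerate(my_data_loader):
    pass  # the training step would go here
```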

I don’t recommend solution 1, because .bmp is dramatically storage-consuming (80× the original image size in my case). And can you explain more about how to use solution 2?