How to speed up the data loader

Got it. Thank you~

Once I save preprocessed images in .pt format, they will load as tensors directly. How can I do random crop and resize then? Do I need to convert them back to PIL.Image?

Thanks

I hadn’t thought of that problem… I can think of two approaches but I can’t tell you which will work the fastest.

  1. Convert each image to .bmp format instead of .jpg, then use your original loader to load the .bmp files, which will decompress much faster than .jpg.
  2. Use torchvision.transforms.ToPILImage, which I think should run pretty fast (see the sketch below).
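
A minimal sketch of option 2, assuming the saved .pt files hold CxHxW uint8 image tensors; the file name and the crop/resize sizes are placeholders, not from the original posts:

import torch
from torchvision import transforms

# Convert the loaded tensor back to PIL, then apply the usual augmentations.
augment = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomCrop(256),
    transforms.Resize(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),  # back to a CxHxW float tensor in [0, 1]
])

img_tensor = torch.load('sample.pt')  # hypothetical preprocessed image tensor
img = augment(img_tensor)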
2 Likes

Thank you.

I have implemented dataset_h5. It runs quite well when I set num_workers to 1 or 2, but I run into a problem when I set num_workers greater than 2. It seems related to the h5py version; mine is currently 2.7.1.

File "/home/titan/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 281, in __next__
    return self._process_next_batch(batch)
File "/home/titan/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 301, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
KeyError: 'Traceback (most recent call last):
  File "/home/titan/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 55, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/titan/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 55, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/titan/code/res-pytorch/AdobeData/AdobeData.py", line 595, in __getitem__
    fgimg = self.fgfile['img'][index, ...]
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "/home/titan/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py", line 167, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (bad object header version number)"'

My code looks like this:

import h5py
import numpy as np
import torch
import torch.utils.data as data


class AdobePatchDataHDF5(data.Dataset):

    def __init__(self, root, cropsize=256, outputsize=256):
        # NOTE: the HDF5 file is opened once here and the handle is then
        # shared by all DataLoader worker processes.
        fgfile = h5py.File(root, 'r')

        self.root = root
        self.fgfile = fgfile
        self.cropsize = cropsize
        self.outputsize = outputsize

    def __getitem__(self, index):
        # read image
        fgimg = self.fgfile['img'][index, ...]

        # random crop and resize, random flip with cv2 (omitted here)

        # to tensor: HWC uint8 -> CHW float in [0, 1]
        fgimg = fgimg.astype(np.float32) / 255.0
        fgimg = torch.from_numpy(fgimg.transpose((2, 0, 1)))

        # normalize [0, 1] to [-1, 1] (omitted here)

        # label is computed elsewhere in the full code (elided in this post)
        return fgimg, label

    def __len__(self):
        return self.fgfile['img'].shape[0]

My train loader looks like this:

    train_loader = torch.utils.data.DataLoader(
        AdobePatchDataHDF5(path), batch_size=256, shuffle=True,
        num_workers=8, pin_memory=True, sampler=None)
3 Likes

What if you use the swmr=True (single writer, multiple readers) option when opening the HDF5 file?
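
For reference, that would look like this (the file path is a placeholder):

import h5py

# Open the HDF5 file for reading in SWMR (single-writer / multiple-reader) mode.
fgfile = h5py.File('/path/to/data.h5', 'r', swmr=True)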

2 Likes

Yeah, with swmr=True it still hits the same bug.

1 Like

It looks like HDF5 has some concurrency issues. My suggestion of using it is probably not appropriate when you use several workers. I often use a single worker because my networks are computationally heavy and I’m not limited by the data iterator. Perhaps you should try other approaches like zarr (http://zarr.readthedocs.io/en/stable/), which has been designed to be thread-safe.

2 Likes

I’m not sure if my problem is the same as yours, but I have trouble reading very big images (3k by 3k).
If this is your case, here is my advice:

  1. Can you divide the images into smaller sub-regions, e.g. 1k by 1k? If yes, just crop the images into smaller pieces. Reading them should then be much faster.
  2. You can use jpeg4py, a library dedicated to decoding big JPEG files much faster than PIL. Just read the image with this library, then convert it to PIL.
  3. The fastest option I have found is using the jpeg4py library together with OpenCV data augmentation (so no PIL image at all). I used the OpenCV technique from this pull request. A rough sketch is below.
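
Here is a minimal sketch of option 3, assuming jpeg4py and OpenCV are installed; the function name and the crop/output sizes are placeholders:

import cv2
import jpeg4py as jpeg
import numpy as np
import torch

def load_and_augment(path, crop=256, out=224):
    # Decode the JPEG via libjpeg-turbo (returns an HxWxC uint8 array).
    img = jpeg.JPEG(path).decode()

    # Random crop with plain NumPy indexing.
    h, w = img.shape[:2]
    y = np.random.randint(0, h - crop + 1)
    x = np.random.randint(0, w - crop + 1)
    img = img[y:y + crop, x:x + crop]

    # Resize and random horizontal flip with OpenCV instead of PIL.
    img = cv2.resize(img, (out, out), interpolation=cv2.INTER_LINEAR)
    if np.random.rand() < 0.5:
        img = cv2.flip(img, 1)

    # HWC uint8 -> CHW float tensor in [0, 1].
    return torch.from_numpy(img.transpose(2, 0, 1)).float() / 255.0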
3 Likes

After reading some of the code of torch.utils.data.dataloader, I find that it does not work like Caffe, which prefetches the next batch of data while the GPUs are working. I found a blog post that tries to do this. I will try it.

######################################################

After reading the blog post, I realized I was mistaken: the dataloader does try to prefetch the next batch of data. But I find that it can NOT make full use of the CPUs (it only uses about 60% of CPU capacity). The blog shows that data preprocessing takes less than 18% of the time. Actually, if the prefetching fully achieved its goal, made full use of the CPUs, and the disk were fast enough, it should be near 0%, not 18%.

1 Like

@Hou_Qiqi
Have you finally fixed this annoying "KeyError: 'Unable to open object (bad object header version number)'" problem in h5py.h5o.open?

I have the same issue as you when I set num_workers greater than 2.

@taiky
Sorry, I didn’t fix this bug. Instead, I now run a separate process that prepares the data on the fly while the GPUs are running. Hope that helps you.

Check out a potential solution to HDF5 dataloader concurrency issues: https://stackoverflow.com/a/52249344/411907
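
In short, the idea there is that the HDF5 file has to be opened inside __getitem__ rather than __init__, so that every DataLoader worker process gets its own handle. A minimal sketch, reusing the 'img' dataset name from the code above:

import h5py
import torch.utils.data as data

class H5Dataset(data.Dataset):
    def __init__(self, path):
        self.path = path
        self.fgfile = None
        # Open briefly just to read the length, then close again.
        with h5py.File(path, 'r') as f:
            self.length = f['img'].shape[0]

    def __getitem__(self, index):
        # Opened once per worker process, on first access.
        if self.fgfile is None:
            self.fgfile = h5py.File(self.path, 'r')
        return self.fgfile['img'][index, ...]

    def __len__(self):
        return self.length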

1 Like

For anyone reading this, Nvidia DALI is a great solution:

It has simple-to-use PyTorch integration.

I was running into the same problems with the PyTorch dataloader. On ImageNet, I couldn’t seem to get above about 250 images/sec. On a Google Cloud instance with 12 cores and a V100, I could get just over 2000 images/sec with DALI. However, in cases where the dataloader isn’t the bottleneck, I found that using DALI would impact performance by 5-10%. This makes sense, I think, as you’re using the GPU to do some of the decoding and preprocessing.

Edit: DALI also has a CPU-only mode, meaning no GPU performance hit.

5 Likes

@Hou_Qiqi Were you able to speed up your dataloader? Did you try preparing the data while GPUs are running?

Just found this thread a few days ago and implemented NVIDIA DALI to load my data while doing some transfer learning with AlexNet.

On an AWS p2.8xlarge instance (8 Tesla K80 GPUs), using DALI speeds up my epoch time from 480 s/epoch to 10 s/epoch. No need for any code that explicitly prepares data while the GPUs are running. One important thing to note is that if you’re using a DALI dataloader from an external source (i.e., if you have your image classes grouped by folder), you have to manually reset the dataloader with loader_name.reset() at the end of every training epoch. That’s not how the PyTorch dataloader works, so it took me a while to realize that was what was going on here.

The only irritating thing I’ve found about DALI is that there is no immediately obvious way (to me, anyway) to convert pixel values from uint8 with a 0-255 range to float with a 0-1 range, which is needed for transfer learning with PyTorch’s pretrained models. Dividing by 255.0 within a pipeline runs into a data type mismatch, the ops.Cast() function only converts to float but doesn’t rescale, and none of the various flavors of normalize functions allow for it either. The only way I was able to do it was by manually scaling the mean and std values given by PyTorch.

Other than that, I agree that the PyTorch integration is simple and fairly clean.

6 Likes

Hi, I got the same problem. You can try clearing your cache and then trying again a few times.

Hi, I also ran into this problem. Have you solved it?

I want to train resnet50 on ImageNet, but the data loading is a bottleneck.

Did you try NVIDIA DALI, e.g. resnet50-by-dali?


I am facing a similar problem. However, I am more interested in knowing how to make my dataloader efficient.
Ideally, I want to read multiple temporal blocks of a video, perform transformations like random crop and rescale, extract I3D features for them, concatenate them into a tensor, and return them together with other information like text embeddings for the question and multiple answers.

For now, I am preprocessing my videos and extracting features beforehand, which does not allow me to use multiple random crops and is not a good approach. The video frames are 60 GB in total. Any tips?

I am using Ubuntu 16.04, 24 GB RAM, 245 GB of 959 GB disk space free right now, 1 Titan Xp GPU, Python 3.6.4, PyTorch 0.4.1, CUDA 8.0.61, cuDNN 7102.
The dataset is on a local disk.

1 Like

This worked for me, thanks. With HDF5, "file opening has to happen inside of the __getitem__ function of the Dataset wrapper." - https://stackoverflow.com/questions/46045512/h5py-hdf5-database-randomly-returning-nans-and-near-very-small-data-with-multi/52249344#52249344

1 Like

I wrote prefetching code and confirmed that it improves the performance of the data loader.
My code is based on the implementation here: https://github.com/NVIDIA/apex/blob/f5cd5ae937f168c763985f627bbf850648ea5f3f/examples/imagenet/main_amp.py#L256
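
For anyone curious, here is a minimal sketch of that prefetching pattern (not the linked code verbatim): copy the next batch to the GPU on a side CUDA stream while the current batch is being used for compute. It assumes the DataLoader was created with pin_memory=True so the copies can be asynchronous.

import torch

class DataPrefetcher:
    def __init__(self, loader):
        self.loader = iter(loader)
        self.stream = torch.cuda.Stream()
        self.preload()

    def preload(self):
        try:
            self.next_input, self.next_target = next(self.loader)
        except StopIteration:
            self.next_input = None
            self.next_target = None
            return
        with torch.cuda.stream(self.stream):
            # Asynchronous host-to-device copies on the side stream.
            self.next_input = self.next_input.cuda(non_blocking=True)
            self.next_target = self.next_target.cuda(non_blocking=True)

    def next(self):
        # Make sure the copies issued in preload() have finished.
        torch.cuda.current_stream().wait_stream(self.stream)
        batch_input, batch_target = self.next_input, self.next_target
        self.preload()
        return batch_input, batch_target

Typical usage: prefetcher = DataPrefetcher(train_loader), then call prefetcher.next() in the training loop until it returns (None, None).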

However, if you run the program on your local machine, I highly recommend buying an NVMe drive (e.g., https://www.amazon.com/Samsung-950-PRO-Internal-MZ-V5P256BW-x/dp/B015SOI392). This investment completely solved the problem of slow image loading for me.

1 Like

So, the solution is to use DALI and then change:
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
to:
normalize = transforms.Normalize(mean=[0.485*255, 0.456*255, 0.406*255], std=[0.229*255, 0.224*255, 0.225*255])

?