Custom Dataset error: unable to mmap memory: you tried to mmap 0GB

My code is:

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class dogloader(Dataset):

    def __init__(self, img, label, transform=None):
        self.img = img            # list of image file paths
        self.label = label        # list of labels, same length as img
        self.transform = transform

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        img = Image.open(self.img[idx]).convert('RGB')
        print(img.size)  # debug output
        if self.transform is not None:
            img = self.transform(img)
        label = torch.from_numpy(np.array(self.label[idx]))
        return img, label

and the error is:

Traceback (most recent call last):
  File "torch_test.py", line 31, in <module>
    for batch_idx, (data, target) in enumerate(dataloader):
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 212, in __next__
    return self._process_next_batch(batch)
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 239, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 41, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 110, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 90, in default_collate
    storage = batch[0].storage()._new_shared(numel)
  File "/usr/local/lib/python2.7/dist-packages/torch/storage.py", line 113, in _new_shared
    return cls._new_using_fd(size)
RuntimeError: $ Torch: unable to mmap memory: you tried to mmap 0GB. at /b/wheel/pytorch-src/torch/lib/TH/THAllocator.c:317

It seems like the error is occurring outside of the code I posted above, on line 31 in fact?

The outside code is just a loop; there is nothing wrong with it.
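For reference, the surrounding code is essentially the standard `enumerate(dataloader)` pattern shown on line 31 of the traceback. Below is a runnable sketch of that setup; the dummy images, labels, transform, batch size, and `num_workers=0` are placeholders standing in for my real data (per the traceback's `_worker_loop`, my actual run used worker processes, i.e. `num_workers > 0`, which is the shared-memory code path that raises the mmap error):

```python
import os
import tempfile

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader


class DogDataset(Dataset):
    # Same structure as the dogloader class above.
    def __init__(self, img, label, transform=None):
        self.img = img
        self.label = label
        self.transform = transform

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        img = Image.open(self.img[idx]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        label = torch.tensor(self.label[idx])
        return img, label


def to_tensor(img):
    # PIL HWC uint8 image -> CHW float32 tensor in [0, 1].
    # A transform is required: default_collate cannot batch raw PIL images.
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return torch.from_numpy(arr).permute(2, 0, 1)


# Dummy images and labels so the sketch runs end to end.
tmpdir = tempfile.mkdtemp()
paths, labels = [], []
for i in range(4):
    p = os.path.join(tmpdir, 'dog%d.png' % i)
    Image.new('RGB', (32, 32), color=(i, 0, 0)).save(p)
    paths.append(p)
    labels.append(i % 2)

dataset = DogDataset(paths, labels, transform=to_tensor)
# num_workers=0 keeps everything in-process; the mmap error above is raised
# on the shared-memory path that only runs when num_workers > 0.
dataloader = DataLoader(dataset, batch_size=2, shuffle=False, num_workers=0)

for batch_idx, (data, target) in enumerate(dataloader):
    print(batch_idx, data.shape, target.shape)
```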