Using dataloader RecursionError: maximum recursion depth exceeded while calling a Python object

Hi, I’m using a DataLoader with 60060 images and a batch size of 5, so the length of my dataloader is 12012.
I got this error:

RecursionError: maximum recursion depth exceeded while calling a Python object

What should I do?
I suppose it’s because the number of images is too big, so how can I divide the dataloader or make it process the data little by little?
Thank you in advance

More detailed traceback:

Traceback (most recent call last):
  File "Training.py", line 141, in <module>
    for i_batch, sample_batched in enumerate(dataloader):
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 637, in __next__
    return self._process_next_batch(batch)
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 658, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RecursionError: Traceback (most recent call last):
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 81, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 81, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 81, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  [Previous line repeated 321 more times]
  File "Training.py", line 67, in __getitem__
    T1a_arr = io.imread(os.path.join(T1a_dir, T1a_str))
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/site-packages/skimage/io/_io.py", line 61, in imread
    with file_or_url_context(fname) as fname:
  File "/home/xiaoyu/miniconda3/envs/deep_mol/lib/python3.6/contextlib.py", line 159, in helper
    return _GeneratorContextManager(func, args, kwds)
RecursionError: maximum recursion depth exceeded while calling a Python object

I think there is something wrong with __getitem__, so I found this post, which suggests that instead of using recursion we could add a _name.
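The repeated dataset.py line in the traceback is ConcatDataset.__getitem__ delegating to an inner dataset, over 300 levels deep. One pattern that produces that kind of nesting (a minimal sketch with made-up sizes, not necessarily the original code) is concatenating pairwise in a loop instead of once over the whole list:

import torch
from torch.utils.data import ConcatDataset, TensorDataset

parts = [TensorDataset(torch.randn(10, 3)) for _ in range(300)]

# Deeply nested: each step wraps the previous ConcatDataset, so a
# single indexing call recurses ~300 frames deep and can exceed
# Python's default recursion limit of 1000.
nested = parts[0]
for part in parts[1:]:
    nested = ConcatDataset([nested, part])

# Flat: one ConcatDataset over the full list; indexing does a single
# bisect lookup with no deep recursion.
flat = ConcatDataset(parts)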

Try raising the recursion limit to a higher value with sys.setrecursionlimit().
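For example (the default limit is usually 1000; 10000 is an arbitrary larger value):

import sys

print(sys.getrecursionlimit())   # read the current limit, typically 1000
sys.setrecursionlimit(10000)     # raise it before iterating the DataLoader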


Thank you, I set sys.setrecursionlimit(10000) and the traceback disappeared :smile:

It looks like you have a recursive method in your Dataset.
Is it on purpose? If not, or if you’re unsure, would you mind posting the Dataset code so that we can have a look?

Hi ptrblck, this is my Dataset:

import os

import torch
from PIL import Image
from skimage import io
from torch.utils.data import Dataset
from torchvision import transforms


class TrainDataset(Dataset):
    """Training dataset with mask image mapping to classes."""
    def __init__(self, T1a_dir, parc5a_dir, transform=None):
        """
        Args:
            T1a_dir (string): Directory with T1w images in the axial plane
            parc5a_dir (string): Directory with parcellation scale 5 in the axial plane
            transform (callable, optional): Transform applied to the T1a tensor
        """
        self._T1a_dir = T1a_dir
        self._parc5a_dir = parc5a_dir
        self.transform = transform
        # Cache the sorted file lists once instead of calling os.listdir
        # on every __len__/__getitem__ call; sorting keeps the
        # T1a/parc5a pairing by index stable across platforms.
        self._T1a_list = sorted(os.listdir(T1a_dir))
        self._parc5a_list = sorted(os.listdir(parc5a_dir))

    def __len__(self):
        return len(self._T1a_list)

    def __getitem__(self, idx):
        T1a_str = self._T1a_list[idx]
        T1a_arr = io.imread(os.path.join(self._T1a_dir, T1a_str))
        T1a_tensor = torch.from_numpy(T1a_arr)

        compose_T1 = transforms.Compose([
            transforms.ToPILImage(),
            transforms.Resize((128, 128), interpolation=Image.NEAREST),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
        T1a_tensor = torch.unsqueeze(T1a_tensor, dim=0)
        T1a_tensor = compose_T1(T1a_tensor)

        parc5a_str = self._parc5a_list[idx]
        parc5a_arr = io.imread(os.path.join(self._parc5a_dir, parc5a_str))
        parc5a_tensor = torch.from_numpy(parc5a_arr)

        compose = transforms.Compose([
            transforms.ToPILImage(),
            transforms.Resize((128, 128), interpolation=Image.NEAREST),
            transforms.ToTensor()])
        parc5a_tensor = torch.unsqueeze(parc5a_tensor, dim=0)
        parc5a_tensor = compose(parc5a_tensor)
        parc5a_tensor = parc5a_tensor.squeeze()

        # ToTensor scales byte labels to [0, 1]; undo the 1/255 (~0.0039)
        # scaling to recover integer class ids.
        parc5a_tensor = torch.round(parc5a_tensor / 0.0039).byte()

        sample = {'T1a': T1a_tensor, 'parc5a': parc5a_tensor}

        if self.transform:
            # Bug fix: the original dict referenced an undefined name
            # `parc5a` here, which would raise a NameError.
            sample = {'T1a': self.transform(T1a_tensor), 'parc5a': parc5a_tensor}

        return sample
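TrainDataset itself contains no recursion, so the deep __getitem__ chain in the traceback most likely comes from how many of these datasets were combined. For reference, a sketch of combining per-subject datasets flatly and feeding a DataLoader (the directory pairs are placeholders, not the original code):

from torch.utils.data import ConcatDataset, DataLoader

# Placeholder per-subject directory pairs; adjust to the real layout.
dir_pairs = [
    ('data/subj01/T1a', 'data/subj01/parc5a'),
    ('data/subj02/T1a', 'data/subj02/parc5a'),
]

datasets = [TrainDataset(t1a, parc5a) for t1a, parc5a in dir_pairs]

# One flat ConcatDataset rather than wrapping pairwise in a loop,
# so __getitem__ goes through a single level of indirection.
full_dataset = ConcatDataset(datasets)
dataloader = DataLoader(full_dataset, batch_size=5, shuffle=True, num_workers=4)

for i_batch, sample_batched in enumerate(dataloader):
    pass  # training step goes here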

Hi, I got the exact same error. I tried sys.setrecursionlimit(10000), but the error is still there. @Xiaoyu_Song @pramod.srinivasan, can you suggest something?