Iterating over multiple dataloaders


I am trying to write code that iterates over multiple dataloaders: not just two (train and val), but 500 dataloaders.
I iterate over a dataset, and for each sample I extract a crop and apply a CNN to the crop.
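To make the setup concrete, here is a minimal sketch of what I am doing. The dataset names, sizes, and the tiny stand-in CNN are all placeholders, not my real code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder: one small dataset of crops per outer sample.
# In my real code there are ~500 of these.
crop_datasets = [TensorDataset(torch.randn(8, 3, 16, 16)) for _ in range(3)]

cnn = torch.nn.Conv2d(3, 4, kernel_size=3)  # stand-in for the real CNN

for ds in crop_datasets:
    loader = DataLoader(ds, batch_size=4, num_workers=2)
    for (crops,) in loader:
        out = cnn(crops)  # apply the CNN to each batch of crops
    # `loader` goes out of scope here, but its worker processes
    # do not seem to be released right away
```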

I wonder if there is a way to purge each dataloader after its iteration finishes, because the workers end up using all of my computer's resources and lock it up.
I then get the following message:
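Something like this is what I have in mind (a sketch only; the `del` / `gc.collect()` approach is what I would like to work, not something I know to be correct):

```python
import gc
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(8, 3, 16, 16))  # placeholder dataset

loader = DataLoader(ds, batch_size=4, num_workers=2)
it = iter(loader)
batch = next(it)

# What I would like: explicitly shut the workers down here,
# before moving on to the next of the 500 loaders.
del it
del loader
gc.collect()  # hoping this reaps the worker processes
```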

Exception ignored in: <bound method _DataLoaderIter.__del__ of < object at 0x7f7f6cc15160>>
Traceback (most recent call last):
  File "/home/gianni/pytorch_venv/lib/python3.5/site-packages/torch/utils/data/", line 399, in __del__
  File "/home/gianni/pytorch_venv/lib/python3.5/site-packages/torch/utils/data/", line 378, in _shutdown_workers
  File "/usr/lib/python3.5/multiprocessing/", line 345, in get
    return ForkingPickler.loads(res)
  File "/home/gianni/pytorch_venv/lib/python3.5/site-packages/torch/multiprocessing/", line 151, in rebuild_storage_fd
    fd = df.detach()
  File "/usr/lib/python3.5/multiprocessing/", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/usr/lib/python3.5/multiprocessing/", line 87, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/usr/lib/python3.5/multiprocessing/", line 487, in Client
    c = SocketClient(address)
  File "/usr/lib/python3.5/multiprocessing/", line 614, in SocketClient
ConnectionRefusedError: [Errno 111] Connection refused

I saw that other people have had a similar issue:

but it is not exactly the same as mine.