Not sure if this has been reported already, but I am getting the following AssertionError in DataLoader:
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7fae94071d30>>
Traceback (most recent call last):
File "/home/amit/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 677, in __del__
self._shutdown_workers()
File "/home/amit/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 659, in _shutdown_workers
w.join()
File "/usr/lib/python3.6/multiprocessing/process.py", line 122, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
I was having this issue. It turns out it's because there was an error in the dataset object (for me it was in the __getitem__ function). I guess the DataLoader in multiprocessing mode doesn't know how to cleanly surface the internal error message. If you have the same problem, try running with num_workers=0 (loading in the main process) and it should tell you what the error is. Once you've fixed the error, it should work with num_workers > 0.
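An equivalent way to surface __getitem__ bugs, without touching the DataLoader at all, is to index the dataset directly in the main process so exceptions propagate normally. A sketch with a hypothetical BuggyDataset (plain Python, no torch required):

```python
# Hypothetical dataset with a bug hiding in __getitem__.
class BuggyDataset:
    def __init__(self):
        self.samples = [1, 2, "three", 4]  # a bad sample slipped in

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        return self.samples[i] * 2.0  # fails on the string sample

def find_bad_indices(ds):
    """Index the dataset directly (no workers) so errors are visible."""
    bad = []
    for i in range(len(ds)):
        try:
            ds[i]
        except Exception as e:
            bad.append((i, repr(e)))
    return bad

bad = find_bad_indices(BuggyDataset())
print(bad)  # reports index 2 with a TypeError
```

This pinpoints exactly which sample and which line of __getitem__ is failing, which the multiprocessing traceback above obscures.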
I just got this error. My data is good, and training with num_workers=0 is too slow. For whatever reason, I was able to fix it by replacing from tqdm.auto import tqdm with just from tqdm import tqdm. Something seems to bug out when parallel dataloaders are wrapped in the fancy notebook tqdm with my versions of nodejs and ipywidgets. Hope this helps others.
Same here: no tqdm involved, and the code worked with num_workers=0, 1, and 2, but I saw a lot of these errors when num_workers >= 3.
I ran the code inside Docker, and increasing the shared memory size (--shm-size 256M → 1G) solved the problem for me; it now works fine with num_workers=12.
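If you are inside a container and unsure what limit the workers actually see, you can inspect the shared-memory mount directly. A sketch assuming a Linux host or container with /dev/shm mounted (which is where DataLoader workers exchange tensors):

```python
import os

# Query the filesystem backing /dev/shm; Docker's --shm-size flag
# controls how large this mount is inside the container.
st = os.statvfs("/dev/shm")
shm_bytes = st.f_frsize * st.f_blocks
print(f"/dev/shm size: {shm_bytes / 2**20:.0f} MiB")
```

If the reported size is small (Docker's default is 64 MiB), restarting the container with a larger --shm-size is the usual fix for worker crashes at higher num_workers.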
I just changed num_workers to 0, ran the training once, and then changed it back to num_workers=4. The error disappeared afterwards.