Error while Multiprocessing in Dataloader

Have you found a solution?

I am using num_workers with an IterableDataset and it has this problem too.

3 Likes

Yes…
Same here.

testset = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=0)

But I'm worried that this workaround only works for my local testing.
So I'd like to know the root cause and a proper solution.

^^

Did you solve it? Or is it a problem with num_workers?

1 Like

Well, I am getting the same error; it says "can only join a child process". I do not know what that means.

1 Like

I was having this issue. It turns out it's because there was an error in the dataset object (for me it was in the __getitem__ function). I guess the DataLoader in multiprocessing mode doesn't know how to cleanly surface the internal error message. If you have the same problem, try running with num_workers=0 (data is loaded in the main process) and it should tell you what the error is. Once you've fixed the error, it should work with num_workers > 0.
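
For example, here's a minimal sketch of the debugging approach (the dataset and the deliberate off-by-one bug below are made up for illustration):

import torch
from torch.utils.data import Dataset, DataLoader

class BuggyDataset(Dataset):  # hypothetical dataset with a bug in __getitem__
    def __init__(self):
        self.data = list(range(100))
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return torch.tensor(self.data[idx + 1])  # off-by-one: IndexError on the last index

# With num_workers=0 the real IndexError traceback is printed directly;
# with num_workers > 0 it can be buried under worker shutdown errors.
loader = DataLoader(BuggyDataset(), batch_size=4, num_workers=0)
for batch in loader:
    pass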

6 Likes

Same issue here, and I don't know where the error is.

I just got this error. My data is fine, and training with num_workers=0 is too slow. For whatever reason, I was able to fix it by replacing
from tqdm.auto import tqdm
with just
from tqdm import tqdm
Something seems to bug out when parallel dataloaders are wrapped in the fancy notebook tqdm with my versions of nodejs and ipywidgets. Hope this helps others.
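
In case it's useful, this is roughly the pattern I mean (loader is assumed to be an existing DataLoader with num_workers > 0):

# from tqdm.auto import tqdm  # notebook widget version: triggered the worker errors for me
from tqdm import tqdm          # plain console version: works

for batch in tqdm(loader):
    pass  # training step goes here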

23 Likes

This solved my problem.
Thanks a lot!

This is literally gold!

This solved my problem, too.

But why???

This helped! Thank you!

I’m not even using tqdm and my code works fine with num_workers=0. What could be the problem?

1 Like

Same here: no tqdm, and the code worked with num_workers=0, 1, and 2, but I saw a lot of these errors when num_workers >= 3.
I ran the code inside Docker, and increasing the shared memory size (--shm-size 256M → 1G) solved the problem for me; it now works fine with num_workers=12.
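
For reference, the flag goes on the docker run command; something like this (the image name and script are placeholders):

docker run --shm-size=1g my-training-image python train.py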

That works for me. Thanks!

Not using tqdm, but changing num_workers from 1 to 0 made this error go away on my Colab run! :slight_smile:

2 Likes

Replacing
from tqdm.auto import tqdm
with just
from tqdm import tqdm
really helped.

Thanks a lot!

I just changed num_workers to num_workers=0, ran the training once, and then changed it back to num_workers=4. The error just disappeared afterwards.

The warnings were annoying me a lot. Thanks :))

FWIW - if using pytorch-lightning, the suggested import solution did not help. In my case, a custom dataset was generating ALL of its data on the fly. Altering it to pregenerate the bulk of the data made the issue go away, even with non-zero workers.
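
Roughly the change, sketched with a hypothetical dataset (the names and shapes are made up for illustration):

import torch
from torch.utils.data import Dataset

class OnTheFlySamples(Dataset):  # before: every sample is built inside __getitem__
    def __len__(self):
        return 1000
    def __getitem__(self, idx):
        return torch.randn(8)  # generated per call, inside each worker process

class PregeneratedSamples(Dataset):  # after: data is built once, up front
    def __init__(self, n=1000):
        self.samples = [torch.randn(8) for _ in range(n)]  # pregenerated in the main process
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx]  # workers only index into precomputed data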