Dataloader Error

I am using the DataLoader class to load files and then send them to a GPU (an NVIDIA K80). The host CPU is an Intel Haswell processor.

image_loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True, num_workers=4)
for i, x in enumerate(image_loader, 0):  # enumerate yields (index, batch)
    x = x.to(device)

However, this gives the error:

RuntimeError: could not unlink the shared memory file /torch_18607_1504223053

What could the issue be? It does not appear that any file paths are duplicated in my dataset, which could otherwise cause the dataloader to load the same image simultaneously in two parallel worker processes.
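For reference, a quick way to confirm there are no duplicate paths is a sketch like the following (assuming the dataset keeps its samples as a plain list of path strings; `find_duplicates` and `paths` are hypothetical names, not part of my actual code):

```python
from collections import Counter

def find_duplicates(paths):
    """Return every path that occurs more than once in the list."""
    return [p for p, count in Counter(paths).items() if count > 1]

# Example: the second list contains "a.png" twice, so it is flagged.
print(find_duplicates(["a.png", "b.png"]))           # no duplicates
print(find_duplicates(["a.png", "a.png", "b.png"]))  # "a.png" is duplicated
```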

Is your code running fine with num_workers=0?
The error message points to a failing exit call, so it might be a red herring and your code might crash before the unlike error is raised.

I have not observed the error with num_workers=0, but it also doesn’t always appear when running with num_workers=4. My guess is that the dataloader’s shuffling occasionally produces a specific combination of objects that fails when loaded together. Is this a possibility?

Also, are you referring to an exit call by the dataloader workers? And what is the unlike error?

Are you processing multiple samples inside your Dataset's __getitem__ method?
If you are loading and processing single samples, I don’t think the error is related to the shuffling, but that’s of course just a guess.
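To illustrate what "single samples" means here: a map-style dataset only needs to implement __len__ and __getitem__, and each __getitem__ call should load and return exactly one sample. A minimal sketch (the class and file list below are hypothetical; in a real project you would subclass torch.utils.data.Dataset and return tensors, but the protocol is the same):

```python
class ImageFileDataset:
    """Minimal map-style dataset: one file path per sample."""

    def __init__(self, paths):
        self.paths = paths  # one path per sample, no duplicates

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        # Load exactly ONE sample here. Worker processes call this
        # independently, so keeping per-index work small limits the
        # shared memory each worker needs.
        path = self.paths[index]
        return path  # placeholder for the loaded/decoded image

dataset = ImageFileDataset(["img0.png", "img1.png", "img2.png"])
print(len(dataset))   # number of samples
print(dataset[1])     # one sample per index
```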

“unlike” was the autocorrected version of unlink. :wink: