CUDA runtime error (801) with multiple workers

I have a simple setup where I want to use multiple workers in a DataLoader. A minimal sample looks like this:

import torch
import torch.utils.data as Data

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Create the data directly on the GPU
X = torch.rand((10000, 200)).to(device)
y = torch.rand((10000, 1)).to(device)

dataset = Data.TensorDataset(X, y)

loader = Data.DataLoader(
    dataset=dataset,
    batch_size=20,
    shuffle=True,
    num_workers=2)

# Just iterate over the batches
for i, (X_batch, y_batch) in enumerate(loader):
    continue

But I get the following error message:

RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_100118\conda\conda-bld\pytorch_1579082551706\work\torch/csrc/generic/StorageSharing.cpp:245

Can anyone help me understand this problem?
(It only happens for num_workers > 0.)

You are most likely recreating the CUDA context in the worker processes and would have to change the start method as described here.
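
Something along these lines might work (a minimal sketch, not a drop-in fix: it keeps the tensors on the CPU and moves each batch to the GPU inside the loop, since CUDA tensors generally cannot be shared with worker processes on Windows, and it sets the spawn start method inside the main guard):

import torch
import torch.multiprocessing as mp
import torch.utils.data as Data

def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # Keep the dataset tensors on the CPU so the workers can load them
    X = torch.rand((10000, 200))
    y = torch.rand((10000, 1))
    dataset = Data.TensorDataset(X, y)

    loader = Data.DataLoader(
        dataset=dataset,
        batch_size=20,
        shuffle=True,
        num_workers=2)

    for X_batch, y_batch in loader:
        # Move each batch to the GPU as it is consumed
        X_batch, y_batch = X_batch.to(device), y_batch.to(device)

if __name__ == "__main__":
    # The start method must be set before any workers are spawned
    mp.set_start_method("spawn", force=True)
    main()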

Hi,

I am getting the same error and I am using set_start_method('spawn').

Could you please help check what is wrong with this code?