CUDA initialization error when using a DataLoader with CUDA tensors

If you would like to use multiple workers in your DataLoader, pass the data as a CPU tensor to TensorDataset and push each batch to the GPU inside the loop:

import torch
from torch.utils.data import TensorDataset, DataLoader

ds = TensorDataset(torch.from_numpy(data))  # data stays on the CPU
dl = DataLoader(ds, batch_size=1, num_workers=1, shuffle=True)
for (x,) in dl:  # TensorDataset yields a tuple per batch
    x = x.to('cuda', non_blocking=True)

Otherwise each worker process would try to initialize its own CUDA context, which raises the error you are seeing.
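As a side note, non_blocking=True only gives you an asynchronous copy when the source tensor lives in pinned (page-locked) host memory, which the DataLoader can handle for you via pin_memory=True. A minimal sketch (the array shape and batch size here are placeholders, not from your setup):

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

data = np.random.randn(8, 3).astype(np.float32)  # placeholder CPU array
ds = TensorDataset(torch.from_numpy(data))

# pin_memory=True copies each batch into page-locked memory in the workers,
# so the later .to('cuda', non_blocking=True) can overlap with compute
dl = DataLoader(ds, batch_size=4, num_workers=2, shuffle=True, pin_memory=True)

for (x,) in dl:
    if torch.cuda.is_available():
        x = x.to('cuda', non_blocking=True)
```

Without pinned memory the transfer silently falls back to a synchronous copy, so the flag costs nothing but is needed for the overlap to actually happen.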
