Cannot use tensor in dataloader, RuntimeError: Cannot re-initialize CUDA in forked subprocess

I save a tensor using torch.save(tensor, "A.pth") and load it in my DataLoader process via T = torch.load("A.pth").
Then I convert the tensor to NumPy via T_Numpy = T.data.cpu().numpy().
Why am I getting the following error:

    "Cannot re-initialize CUDA in forked subprocess. " + msg)
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

How can I fix it? :frowning:

P.S. I am using DataParallel and I'm on PyTorch 0.3.0.

I figured out that if I save the tensors without the Variable wrapper, i.e. save Tensor.data instead of the Variable itself, the problem disappears :smiley:
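A minimal sketch of that workaround (the filename A.pth is taken from the question; Variable is the PyTorch 0.3-era autograd wrapper, kept today only as a deprecated alias):

```python
import torch
from torch.autograd import Variable  # deprecated alias, shown here for the 0.3-era workflow

v = Variable(torch.ones(2))

# Save the underlying tensor (.data), moved to the CPU first,
# rather than the Variable wrapper itself. The saved file then
# carries no CUDA device information, so loading it in a forked
# DataLoader worker does not try to initialize CUDA.
torch.save(v.data.cpu(), "A.pth")

loaded = torch.load("A.pth")
```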

I had this problem too, but that solution did not work for me (possibly because Variable is now deprecated). The fix was to load the tensor onto the CPU via the map_location argument of torch.load. When using a DataLoader with torch.multiprocessing, the tensor was being loaded onto the GPU automatically, which caused this error; testing without torch.multiprocessing, the error did not occur.
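A short sketch of that fix (the filename A.pth is assumed from the question; in practice the torch.load call would sit inside your Dataset's __getitem__):

```python
import torch

# Create and save a tensor, standing in for the original "A.pth".
torch.save(torch.arange(4, dtype=torch.float32), "A.pth")

# Load directly onto the CPU: map_location="cpu" remaps any CUDA
# device stored in the file, so a forked DataLoader worker never
# re-initializes CUDA.
t = torch.load("A.pth", map_location="cpu")
t_numpy = t.numpy()  # safe: the tensor is already on the CPU
```

If you do need the data on the GPU, move it there in the main process after the DataLoader hands the batch back, rather than inside the worker.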