Problem with multiprocessing with GPU

Whenever I try to use multiprocessing with my device set to a GPU, I get this error:

THCudaCheck FAIL file=C:\w\b\windows\pytorch\torch/csrc/generic/StorageSharing.cpp line=245 error=801 : operation not supported
Traceback (most recent call last):
File "c:/Users/chalk/OneDrive/Documents/Monash Uni/FYP/Rubik's Cube/", line 103, in
File "C:\Users\chalk\Anaconda3\lib\multiprocessing\", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\chalk\Anaconda3\lib\multiprocessing\", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\chalk\Anaconda3\lib\multiprocessing\", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\chalk\Anaconda3\lib\multiprocessing\", line 89, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\chalk\Anaconda3\lib\multiprocessing\", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "C:\Users\chalk\Anaconda3\lib\site-packages\torch\multiprocessing\", line 242, in reduce_tensor
event_sync_required) = storage._share_cuda_()
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\b\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245
(base) PS C:\Users\chalk\OneDrive\Documents\Monash Uni\FYP\Rubik's Cube> Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\chalk\Anaconda3\lib\multiprocessing\", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\chalk\Anaconda3\lib\multiprocessing\", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

For example, if I run the example code found here with CUDA enabled, I get the error.

Also, whenever I set num_workers > 0, I get an error.

What’s going on here?

Could you try to add the if-clause protection as explained in the Windows FAQ?
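For reference, a minimal sketch of that protection (the worker function and the queue hand-off here are hypothetical stand-ins for your training code): on Windows, multiprocessing uses the spawn start method, which re-imports your script in every child process, so any code that launches processes must sit behind the guard.

```python
import multiprocessing as mp

def worker(q, rank):
    # Hypothetical worker: report a result back to the parent.
    q.put(rank * rank)

if __name__ == "__main__":
    # Without this guard, every spawned child would re-execute the
    # process-launching code below and recurse on Windows.
    mp.freeze_support()  # only needed for frozen executables; harmless otherwise
    q = mp.Queue()
    procs = [mp.Process(target=worker, args=(q, r)) for r in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(sorted(q.get() for _ in procs))  # [0, 1]
```

Everything above the guard (imports and function definitions) is safe to leave at module level; only the code that actually starts processes needs protecting.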

Do you mean adding the freeze_support() line? I just tried that and it didn't change anything. If you mean the if __name__ == "__main__": guard, I already had that, as can be seen in the example code.

Looking at this link, does this literally mean it's impossible to set num_workers > 0 on Windows? Also, regarding sharing CPU tensors, I have set that up, but I still can't share the model that's on the GPU.
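For what it's worth, error 801 is the CUDA IPC machinery refusing to share GPU storage across processes, which Windows does not support, so the usual workaround is to keep the shared copy of the model on the CPU and have each worker move it to the GPU itself. A rough sketch (the Linear model and the train function are placeholders for your own code, not the API's required shape):

```python
import torch
import torch.multiprocessing as mp

def train(rank, model, device):
    # The GPU transfer happens here, inside the child process, so no
    # CUDA storage ever crosses the process boundary.
    model = model.to(device)
    x = torch.randn(4, 8)
    return model(x.to(device)).sum().item()

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(8, 2)  # built on the CPU in the parent
    model.share_memory()           # sharing CPU tensors does work on Windows
    mp.spawn(train, args=(model, device), nprocs=2)
```

The key design point is that only CPU (shared-memory) tensors are pickled across the process boundary; each child creates its own CUDA context and its own GPU copy of the parameters.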