Whenever I try to use multiprocessing with my device set to the GPU (CUDA), I get this error:
THCudaCheck FAIL file=C:\w\b\windows\pytorch\torch/csrc/generic/StorageSharing.cpp line=245 error=801 : operation not supported
Traceback (most recent call last):
  File "c:/Users/chalk/OneDrive/Documents/Monash Uni/FYP/Rubik's Cube/test2.py", line 103, in <module>
    p.start()
  File "C:\Users\chalk\Anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\chalk\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\chalk\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\chalk\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\chalk\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Users\chalk\Anaconda3\lib\site-packages\torch\multiprocessing\reductions.py", line 242, in reduce_tensor
    event_sync_required) = storage._share_cuda_()
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\b\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245
(base) PS C:\Users\chalk\OneDrive\Documents\Monash Uni\FYP\Rubik's Cube> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\chalk\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\chalk\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
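Stripped down, what I'm doing is essentially this (names and tensor shape are just placeholders, my real script is larger; I've added a CPU fallback so the sketch also runs on machines without CUDA, since the failure only shows up when the tensor is on the GPU):

```python
import torch
import torch.multiprocessing as mp

def worker(t):
    # Child process receives the tensor passed from the parent.
    print(t.sum())

if __name__ == "__main__":
    # The error only occurs when the tensor lives on the GPU;
    # with a CPU tensor this runs fine.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    t = torch.ones(4, device=device)

    p = mp.Process(target=worker, args=(t,))
    p.start()  # this is where I hit "cuda runtime error (801)" on Windows
    p.join()
```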
For example, if I run the MNIST Hogwild example code found here with CUDA enabled, https://github.com/pytorch/examples/blob/master/mnist_hogwild/train.py, I get the same error.
I also get an error whenever I set num_workers > 0 on a DataLoader.
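For reference, the num_workers case is just an ordinary DataLoader; the dataset below is a stand-in for mine:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":
    # Tiny stand-in dataset; my real one loads actual training data.
    ds = TensorDataset(torch.arange(8.0))

    # num_workers=0 works for me; any value > 0 errors out.
    loader = DataLoader(ds, batch_size=2, num_workers=2)
    for (batch,) in loader:
        print(batch)
```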
What’s going on here?