A call to torch.cuda.is_available makes an unrelated multi-processing computation crash?

When using multiprocessing with CUDA, it is important to use the spawn (or forkserver) start method instead of the default fork method: a forked child inherits the parent's CUDA context, which the CUDA runtime does not support.
http://pytorch.org/docs/notes/multiprocessing.html#sharing-cuda-tensors

import torch.multiprocessing as multiprocessing

if __name__ == '__main__':
    # Must be guarded and called at most once: with spawn, child processes
    # re-import the main module, and a second unguarded call would raise
    # "RuntimeError: context has already been set".
    multiprocessing.set_start_method('spawn')
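For a self-contained illustration of the start-method mechanics, here is a minimal sketch using the standard-library multiprocessing module (torch.multiprocessing is a drop-in wrapper around it). The worker and function names are made up for the example; get_context is used instead of set_start_method so the spawn choice stays local to this snippet:

```python
import multiprocessing as mp

def worker(x, q):
    # Runs in a freshly spawned interpreter, so no CUDA state from the
    # parent is inherited (the crash described above cannot occur here).
    q.put(x * 2)

def main():
    ctx = mp.get_context('spawn')  # like set_start_method('spawn'), but local
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(21, q))
    p.start()
    result = q.get()
    p.join()
    return result

if __name__ == '__main__':
    print(main())  # → 42
```

The `if __name__ == '__main__':` guard is mandatory with spawn: the child re-imports the main module, and without the guard it would recursively spawn more processes.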

This is a restriction of the CUDA runtime itself, not of PyTorch: CUDA cannot be safely re-initialized in a forked subprocess.
