I’m using DataParallel on my Windows 10 machine with PyTorch 1.10.
Although the code works, I get this message:
UserWarning: PyTorch is not compiled with NCCL support
warnings.warn('PyTorch is not compiled with NCCL support')
I learned that Windows doesn’t support NCCL, but I’m not sure what that implies:
A. It doesn’t really affect performance.
B. It works, but it’s not efficient on a Windows machine (maybe slower, or memory-inefficient?)
C. It’s not actually doing the work in parallel (e.g. it runs sequentially, gpu0 -> gpu1 -> gpu2 …)
Depending on the answer, is there a recommended solution or workaround?
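For reference, here is a minimal sketch of the kind of setup that triggers the warning (the model and sizes are made-up placeholders, not the actual code from my project):

```python
import torch
import torch.nn as nn

# Placeholder model; any nn.Module is wrapped the same way.
model = nn.Linear(8, 4)

# DataParallel splits each input batch across the visible GPUs.
# On a CPU-only box (or with a single GPU) it simply forwards
# the whole batch to the wrapped module.
dp_model = nn.DataParallel(model)

x = torch.randn(16, 8)   # batch of 16 samples
y = dp_model(x)
print(y.shape)           # torch.Size([16, 4])
```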
Have you tried v1.11? That warning message won’t have any performance impact on your training, but I can’t find any place in our latest code base where DataParallel calls nccl.is_available() (which is what ultimately emits that warning).
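Separately from the warning: if you do want a Windows-friendly path to multi-GPU training, DistributedDataParallel with the gloo backend has been supported on Windows since PyTorch 1.7 and is generally recommended over DataParallel anyway. A minimal single-process sketch, assuming a CPU model for simplicity (the file-store path, model, and sizes are placeholders; pass device_ids when using GPUs):

```python
import os
import tempfile

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# gloo works on Windows; NCCL does not. A file store avoids
# needing MASTER_ADDR/MASTER_PORT for this single-process demo.
init_file = os.path.join(tempfile.mkdtemp(), "ddp_init")
dist.init_process_group(
    backend="gloo",
    init_method=f"file://{init_file}",
    rank=0,
    world_size=1,
)

model = nn.Linear(8, 4)     # placeholder model
ddp_model = DDP(model)      # CPU DDP; add device_ids=[rank] on GPUs

y = ddp_model(torch.randn(16, 8))
print(y.shape)              # torch.Size([16, 4])

dist.destroy_process_group()
```

In a real run you would launch one process per GPU (e.g. via torch.multiprocessing.spawn) with matching rank/world_size values instead of the single-process setup shown here.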