[DataParallel on a Windows Machine] UserWarning: PyTorch is not compiled with NCCL support

I'm using DataParallel on my Windows 10 machine with PyTorch 1.10.
Although the code works, I get this message:

UserWarning: PyTorch is not compiled with NCCL support
warnings.warn('PyTorch is not compiled with NCCL support')

I learned that Windows doesn't support NCCL, but I'm not sure what that implies.

A. It doesn't really affect performance.
B. It works, but it's not efficient on a Windows machine (maybe slower, or less memory efficient?).
C. It's not actually doing the work in parallel (e.g. it runs sequentially: gpu0 -> gpu1 -> gpu2 ...).

Depending on the answer, is there a recommended solution or workaround?

Thanks for reading!

Hi, you may refer to the Note here:

As of PyTorch v1.8, Windows supports all collective communications backend but NCCL.

Hence I believe you can still get torch.distributed working, just without the performance benefits that NCCL brings.

If in doubt, you can use the torch.distributed.is_available() API to see if it works on your platform.
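For instance, a quick check along these lines (a minimal sketch; the exact output depends on your build) should report Gloo as available and NCCL as unavailable on a Windows build:

```python
# Quick sanity checks for what this PyTorch build supports.
# Sketch only -- output depends on your build and platform.
import torch
import torch.distributed as dist

print(torch.__version__)
print("distributed available:", dist.is_available())

if dist.is_available():
    # On Windows builds, Gloo is typically available but NCCL is not.
    print("gloo available:", dist.is_gloo_available())
    print("nccl available:", dist.is_nccl_available())
```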

I'm not using torch.distributed; I'm just using DataParallel, and I still get this message.
For example, nowhere in the code do I ask to use "nccl".
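Roughly, this is all I'm doing (a simplified sketch; the real model, data, and device ids are placeholders), and "nccl" never appears anywhere:

```python
# Simplified version of my setup -- model, batch, and device ids are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                          # placeholder model
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

x = torch.randn(32, 128).cuda()                     # placeholder batch
out = model(x)                                      # forward pass is scattered across the GPUs
print(out.shape)
```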

Have you tried v1.11? That warning message won't have any performance impact on your training, but I can't find any place in our latest code base where DataParallel calls nccl.is_available() (which is what ultimately emits that warning).
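If you do want explicit multi-GPU parallelism without NCCL, one possible direction (not the only one) is DistributedDataParallel with the Gloo backend, which Windows does support. Here is a minimal single-node sketch; the model, port, and spawn setup are placeholders and would need adapting to your actual training loop:

```python
# One possible workaround on Windows: DistributedDataParallel with Gloo.
# Minimal single-node sketch; world size, port, and model are placeholders.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank, world_size):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"        # placeholder port
    # Gloo is the backend Windows supports; NCCL is not an option here.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = nn.Linear(128, 10).to(rank)        # placeholder model, one GPU per process
    model = DDP(model, device_ids=[rank])

    x = torch.randn(32, 128).to(rank)          # placeholder batch
    loss = model(x).sum()
    loss.backward()                            # gradients are all-reduced via Gloo

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```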