Is MPI backend still supported?

Hi,
I see that version v0.4.1 has been released, and there is a check in torch/nn/parallel/distributed.py:

if dist._backend not in (dist.dist_backend.NCCL, dist.dist_backend.GLOO):
    raise ValueError('Invalid backend, only NCCL and GLOO backends are supported by DistributedDataParallel')

Does this mean MPI is not supported anymore?
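For context, this is roughly how I have been setting things up (a minimal sketch; the model and launch details are placeholders). With the MPI backend, the wrapping step now trips the check above:

import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

# Rank and world size come from the MPI launcher (e.g. mpirun)
dist.init_process_group(backend='mpi')

model = nn.Linear(10, 10).cuda()        # placeholder model on GPU
model = DistributedDataParallel(model)  # raises the ValueError above in v0.4.1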

This check is for the DistributedDataParallel module, which supports the NCCL and GLOO backends. We also have DistributedDataParallelCPU, which supports MPI in addition to these two.
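Something along these lines should work with MPI (a minimal sketch, assuming the job is launched with mpirun and the model lives on the CPU):

import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallelCPU

# With the MPI backend, rank and world size come from the MPI launcher
dist.init_process_group(backend='mpi')

model = nn.Linear(10, 10)                  # placeholder CPU model
model = DistributedDataParallelCPU(model)  # MPI is accepted here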

Hi Deepali,
Thanks for your reply. Does DistributedDataParallelCPU only work for modules with all variables on the CPU, as its name implies? Or can we also use it for a network that resides on the GPU? I'm asking because in the previous version DistributedDataParallel also worked with a GPU network and the MPI backend.