How do I reduce tensors on 4 different GPUs using 2 processes?

import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process

def run(rank, size):
    # Rank 0 places its tensors on GPUs 0 and 1; rank 1 uses GPUs 2 and 3.
    tensor1 = torch.ones(1, device=2*rank)
    tensor2 = torch.ones(1, device=2*rank+1)

    # Sum tensor1 across the two ranks (GPU0 + GPU2) and tensor2 likewise (GPU1 + GPU3).
    dist.all_reduce(tensor1, op=dist.ReduceOp.SUM)
    dist.all_reduce(tensor2, op=dist.ReduceOp.SUM)
    print('Rank ', rank, ' has data ', tensor1[0], tensor2[0])

def init_process(rank, size, fn, backend='nccl'):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29501'
    dist.init_process_group(backend, rank=rank, world_size=size)
    fn(rank, size)

if __name__ == "__main__":
    size = 2
    processes = []
    for rank in range(size):
        p = Process(target=init_process, args=(rank, size, run))
        p.start()
        processes.append(p)

    for p in processes:
        p.join()

I have a computer with four GPUs:
GPU0: tensor1, GPU1: tensor2
GPU2: tensor1, GPU3: tensor2
I expect tensor1 to be reduced across GPU0 and GPU2, and tensor2 to be reduced across GPU1 and GPU3. But when I run the code above, the program just hangs.
I don’t know why. Can someone help me? Thanks!
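
A minimal sketch of how to get more diagnostics out of a hang like this, assuming the NCCL-related environment variables that also appear in the logs further down (NCCL_DEBUG, NCCL_BLOCKING_WAIT, NCCL_ASYNC_ERROR_HANDLING) are honored by this PyTorch build; it is a drop-in variant of init_process from the script above:

import os
import datetime
import torch.distributed as dist

# Ask NCCL to print detailed communicator setup and collective logs.
os.environ['NCCL_DEBUG'] = 'INFO'
# Make collectives block and surface errors instead of hanging silently
# (whether these take effect depends on the PyTorch/NCCL version).
os.environ['NCCL_BLOCKING_WAIT'] = '1'
os.environ['NCCL_ASYNC_ERROR_HANDLING'] = '1'

def init_process(rank, size, fn, backend='nccl'):
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29501'
    # A short timeout turns an indefinite hang into an error after 60 s.
    dist.init_process_group(backend, rank=rank, world_size=size,
                            timeout=datetime.timedelta(seconds=60))
    fn(rank, size)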

Unfortunately I couldn’t reproduce your problem. This is the output of your script when I run it on a 4-GPU machine:

WARNING: Logging before InitGoogleLogging() is written to STDERR
I1025 17:28:00.405782 39617 ProcessGroupNCCL.cpp:520] [Rank 0] ProcessGroupNCCL initialized with following options:
NCCL_ASYNC_ERROR_HANDLING: 0
NCCL_BLOCKING_WAIT: 0
TIMEOUT(ms): 1800000
USE_HIGH_PRIORITY_STREAM: 0
NCCL_DEBUG: UNSET
I1025 17:28:00.405783 39637 ProcessGroupNCCL.cpp:621] [Rank 0] NCCL watchdog thread started!
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1025 17:28:00.422144 39618 ProcessGroupNCCL.cpp:520] [Rank 1] ProcessGroupNCCL initialized with following options:
NCCL_ASYNC_ERROR_HANDLING: 0
NCCL_BLOCKING_WAIT: 0
TIMEOUT(ms): 1800000
USE_HIGH_PRIORITY_STREAM: 0
NCCL_DEBUG: UNSET
I1025 17:28:00.422158 39638 ProcessGroupNCCL.cpp:621] [Rank 1] NCCL watchdog thread started!
Rank  1  has data  tensor(2., device='cuda:2') tensor(2., device='cuda:3')
Rank  0  has data  tensor(2., device='cuda:0') tensor(2., device='cuda:1')

Are you sure that your script is able to see all your CUDA devices (e.g. CUDA_VISIBLE_DEVICES=0,1,2,3)?
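
If it helps, here is a quick sanity-check sketch for what the script actually sees:

import os
import torch

# What the driver exposes to this process; None means no restriction.
print('CUDA_VISIBLE_DEVICES =', os.environ.get('CUDA_VISIBLE_DEVICES'))
# Should print 4 on your machine; with fewer visible devices,
# device index 2*rank+1 would be invalid for rank 1.
print('visible GPU count    =', torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))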

But my results are as follows (screenshot omitted): the script prints nothing and just keeps blocking.

GPU usage details (screenshot omitted): the GPU usage looks strange to me.

My configuration:
PyTorch 1.7.1+cu101
CUDA 10.0.130
GPU: GeForce RTX 2080 Ti
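
Since the wheel is built for CUDA 10.1 (the +cu101 tag) while the system toolkit is 10.0.130, it may be worth checking what the installed PyTorch build itself reports. A minimal sketch:

import torch

# Versions reported by the installed PyTorch build (the wheel ships its
# own CUDA runtime, so this can differ from the system toolkit).
print('torch      :', torch.__version__)
print('built CUDA :', torch.version.cuda)
print('NCCL       :', torch.cuda.nccl.version())
print('GPU 0      :', torch.cuda.get_device_name(0))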