I suspect this is a known issue, but I could not find the right way to do it in the forum…
I have several VMs on the server and want to broadcast a torch tensor with mpi4py using the following snippet, but without success.
import torch
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    x = torch.tensor([1], dtype=torch.float64)
else:
    x = None
x = comm.bcast(x, root=0)
print("Rank {} receives {}".format(rank, x))
How should I broadcast tensors between several ranks?