Is torch.distributed.all_reduce implemented with Ring-AllReduce, Tree-based AllReduce, or something else?

I am using Gloo as the backend for distributed machine learning, and I am curious about how torch.distributed.all_reduce is implemented in detail. The official documentation does not currently describe this. Is it a ring-based all-reduce, a tree-based all-reduce, or something else?
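For context, here is a minimal sketch of how I am invoking it, using the standard torch.distributed setup with the Gloo backend (the address, port, and world size are just placeholder values for a single-node run):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(rank, world_size):
    # Standard single-node rendezvous via environment variables (placeholder values).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)

    # Each rank contributes its own tensor; all_reduce sums them in place,
    # so every rank ends up with the same reduced result.
    t = torch.ones(4) * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {t.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 4
    mp.spawn(run, args=(world_size,), nprocs=world_size)
```

The question is about which collective algorithm runs underneath this call, not about the API itself.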


Hi, I have the same question. Could anyone answer it?