Torch.distributed: disabling connections between workers

I want to create a hub-spoke topology where only the master is connected to the workers. All communication is point-to-point (using send, recv, or isend, irecv calls). Is it possible to prevent the workers (rank > 0 processes) from connecting to each other in the torch.distributed package? Right now, I see that a fully connected mesh is being created between all the processes. Any help would be greatly appreciated.
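For reference, here is a minimal sketch of the kind of setup I mean (assuming the gloo backend, env:// initialization via torchrun, and a hypothetical script name hub_spoke.py). Only rank 0 ever addresses the workers, and each worker only addresses rank 0, yet init_process_group still appears to wire up every pair of ranks:

```python
import torch
import torch.distributed as dist

def run(rank, world_size):
    # Master (rank 0) exchanges a tensor with each worker in turn;
    # workers only ever communicate with rank 0, never with each other.
    if rank == 0:
        for worker in range(1, world_size):
            payload = torch.full((4,), float(worker))
            dist.send(payload, dst=worker)   # point-to-point send to a worker
            reply = torch.zeros(4)
            dist.recv(reply, src=worker)     # point-to-point recv from that worker
            print(f"master received {reply.tolist()} from rank {worker}")
    else:
        buf = torch.zeros(4)
        dist.recv(buf, src=0)                # worker only talks to the master
        dist.send(buf * 2, dst=0)

if __name__ == "__main__":
    # Launched e.g. with: torchrun --nproc_per_node=3 hub_spoke.py
    # (env:// init is assumed; torchrun sets RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT)
    dist.init_process_group(backend="gloo")
    run(dist.get_rank(), dist.get_world_size())
    dist.destroy_process_group()
```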