How to broadcast a value to every GPU?

You can use dist.broadcast_object_list, which broadcasts picklable objects from one rank to all of your workers: torch.distributed.distributed_c10d — PyTorch 1.8.1 documentation
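A minimal sketch of the pattern. For illustration it uses the gloo backend with world_size=1 so it runs without GPUs; in real multi-GPU use you would launch one process per GPU (e.g. with torchrun) and typically use the nccl backend. The config dict and function name here are just placeholders:

```python
import os
import torch.distributed as dist

def broadcast_config():
    # Single-process sketch: gloo backend, world_size=1, runs without GPUs.
    # In real use, launch one process per GPU and pick the right backend.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    # Rank 0 holds the real (picklable) object; every other rank passes a
    # placeholder list of the same length. After the call, the list on
    # every rank contains rank 0's object.
    objs = [{"lr": 0.01}] if dist.get_rank() == 0 else [None]
    dist.broadcast_object_list(objs, src=0)

    dist.destroy_process_group()
    return objs[0]
```

Note that every rank must call broadcast_object_list with a list of the same length, and non-source ranks receive the deserialized objects in place.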