I have broadcast my model to every GPU for multi-GPU inference, and I set an attribute on rank 0, as below:
if rank == 0:
    model.cfg = 'my_setting'
do_something()
(Inside do_something I read model.cfg, but I found that only rank 0 sees 'my_setting'. How can I broadcast this to all GPUs?)
You can use dist.broadcast_object_list, which lets you broadcast picklable objects across all of your workers: torch.distributed.distributed_c10d — PyTorch 1.8.1 documentation
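For example, here is a minimal sketch, assuming the process group is already initialized, model exists on every rank, and do_something is your own function from the question:

    import torch.distributed as dist

    rank = dist.get_rank()

    if rank == 0:
        model.cfg = 'my_setting'

    # Every rank must call this with a list of the same length;
    # rank 0's contents overwrite the lists on the other ranks.
    objects = [model.cfg] if rank == 0 else [None]
    dist.broadcast_object_list(objects, src=0)
    model.cfg = objects[0]  # now 'my_setting' on every rank

    do_something()

Note that broadcast_object_list pickles the objects under the hood, so it works for arbitrary picklable Python objects, but for tensors the plain dist.broadcast is more efficient.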