In addition, you can try setting torch.backends.cudnn.enabled = False
when training with SyncBatchNorm and DDP, as discussed in the forum thread "Training performance degrades with DistributedDataParallel".
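A minimal sketch of this workaround, assuming a toy model whose BatchNorm layers are converted to SyncBatchNorm before the model would be wrapped in DistributedDataParallel (the DDP wrapping and process-group setup are omitted here since they require a multi-process launch):

```python
import torch
import torch.nn as nn

# Disabling cuDNN can work around throughput regressions sometimes
# reported when combining SyncBatchNorm with DistributedDataParallel.
torch.backends.cudnn.enabled = False

# Toy model; in a real setup this conversion happens before wrapping
# the model in torch.nn.parallel.DistributedDataParallel.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

# The BatchNorm2d layer has been replaced by SyncBatchNorm.
print(type(model[1]).__name__)
```

Note that `torch.backends.cudnn.enabled = False` is process-wide: it disables cuDNN for all layers, not just the normalization layers, so measure the overall throughput impact before adopting it.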