Training performance degrades with DistributedDataParallel

Maybe you are right. When using DDP + SyncBN, batch norm statistics are computed over a larger effective batch. The learning rate should be tuned a bit higher (original_lr * num_gpus).
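For concreteness, here is a minimal sketch of that setup (not your code, just an illustration; it assumes the script is launched with torchrun or torch.distributed.launch, and the toy model and base_lr are placeholders):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

# Placeholder model containing BatchNorm layers.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3),
    torch.nn.BatchNorm2d(16),
    torch.nn.ReLU(),
).cuda(local_rank)

# Convert BatchNorm -> SyncBatchNorm so statistics are computed over the global batch.
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
model = DDP(model, device_ids=[local_rank])

base_lr = 0.1                                # single-GPU learning rate (placeholder)
scaled_lr = base_lr * dist.get_world_size()  # original_lr * num_gpus
optimizer = torch.optim.SGD(model.parameters(), lr=scaled_lr, momentum=0.9)
```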

Strangely, using DistributedSampler degrades the performance in my case.

I’m not sure about the effect of DistributedSampler in DistributedDataParallel.
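For what it's worth, the usual wiring looks roughly like this: DistributedSampler gives each rank a disjoint shard of the dataset, and set_epoch has to be called every epoch so the shuffling order changes between epochs. A minimal sketch with a placeholder dataset:

```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group(backend="nccl")

# Placeholder dataset; substitute your own.
dataset = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))

# Each rank sees a disjoint shard of the data.
sampler = DistributedSampler(dataset, shuffle=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(10):
    # Without this call, every epoch reuses the same shuffling order.
    sampler.set_epoch(epoch)
    for images, labels in loader:
        pass  # training step goes here
```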

Can you tell me how this monkey patching is done? Which file is it in?

Hi, do you disable cuDNN for the whole project, or just in the batch norm files?

The main reason may be that SyncBN in DDP estimates the global variance (the learning rate is another reason, but even with that accounted for there is still a decrease of about 0.2%); see https://github.com/pytorch/pytorch/pull/14267#issuecomment-449125620

The source code is pretty straightforward.

Hi.

I got the same problem.
Updating PyTorch to version 1.6.0 didn’t help, although it seems they fixed several things in SyncBN.

Did anybody get an improvement?

Hi @TT_YY, have you tried setting torch.backends.cudnn.enabled = False?
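To answer the earlier question about scope: the flag can be flipped for the whole process, or toggled only around the batch norm code and restored afterwards. A rough sketch (the layer and input below are just placeholders):

```python
import torch

# Option 1: disable cuDNN for the whole process (put this at the top of the script).
torch.backends.cudnn.enabled = False

# Option 2: disable cuDNN only around a specific block and restore it afterwards.
bn = torch.nn.BatchNorm2d(16).cuda()
x = torch.randn(8, 16, 32, 32, device="cuda")

prev = torch.backends.cudnn.enabled
torch.backends.cudnn.enabled = False
try:
    out = bn(x)  # this forward pass will not use the cuDNN batch-norm kernel
finally:
    torch.backends.cudnn.enabled = prev
```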

Hi @rvarm1

Thank you for your response, and sorry for the delay in mine.
I will try it.

Thanks.

Did you ever resolve this issue? Using DDP + SyncBN does not help.


Not sure this is the case for you, but in my case I was using autocast and GradScaler, both set to enabled=False. According to the docs this should mean they have no effect, which was indeed the case with a single GPU and with DP.

However, with DDP I found that introducing them significantly increased the variance of the training and validation loss, deteriorating model accuracy overall. According to the docs, autocast and GradScaler shouldn’t adversely affect DDP, but they did exactly that in my case. I’m not sure why, but I assume it has to do with gradient synchronization in DDP.
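For anyone else hitting this, here is a minimal sketch of the kind of autocast + GradScaler training step I mean (model, optimizer, and criterion are placeholders; use_amp stands in for the enabled flag):

```python
import torch
from torch.cuda.amp import GradScaler, autocast

use_amp = True                        # set to False to turn both features off
scaler = GradScaler(enabled=use_amp)

def train_step(model, optimizer, criterion, images, labels):
    optimizer.zero_grad()
    with autocast(enabled=use_amp):
        outputs = model(images)
        loss = criterion(outputs, labels)
    # With enabled=False, the scaler calls below are effectively no-ops.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.detach()
```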


Do you use ‘loss = losses.sum()’? By default DDP averages the gradients over all ranks, which corresponds to a mean-reduced loss; if a sum-reduced loss is used, it produces the wrong result.
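To make that concrete, here is an illustrative sketch (not a fixed recipe; it assumes the process group is already initialized):

```python
import torch
import torch.distributed as dist

# Mean-reduced loss: matches DDP's gradient averaging over ranks, no extra scaling needed.
criterion_mean = torch.nn.CrossEntropyLoss(reduction="mean")

# Sum-reduced loss: each rank's gradient is a sum over its local samples, but DDP
# still divides the all-reduced gradient by the number of ranks, so the result is
# no longer the gradient of the global sum.
criterion_sum = torch.nn.CrossEntropyLoss(reduction="sum")

def summed_loss(outputs, labels):
    # One way to keep a summed objective: multiply by the world size so that DDP's
    # gradient averaging cancels out and the gradient matches the true global sum.
    return criterion_sum(outputs, labels) * dist.get_world_size()
```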

Hi, I got the same issue. Did you solve the problem?