Training performance degrades with DistributedDataParallel

By “performance”, I mean classification accuracy. Somehow DDP + SyncBN achieves worse test accuracy than DP, so there must be some problematic difference in the numerical algorithm. Speed isn’t the issue here. Thanks!

My mistake, I got it wrong. Thanks for the clarification.

I can only comment on the differences between DP and DDP w.r.t. batch normalization. With DP your module is replicated before each call to forward, which means that only the BN stats from the first replica are kept around. With DDP, each process keeps its own BN stats. And with SyncBN you’ll end up with stats that are “more averaged” than those kept when using DP, because the DP stats only cover the batch slice seen by a single replica, whereas SyncBN aggregates over all replicas.
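
For readers who land here, this is a minimal, illustrative sketch of how BN layers are typically converted for SyncBN before wrapping the model in DDP; the model and layer sizes below are placeholders:

import torch.nn as nn

# hypothetical model containing BatchNorm layers
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())

# swap every BatchNorm*d for SyncBatchNorm so batch statistics are reduced
# across all DDP processes instead of being computed per replica
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)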

I also encountered this performance issue with DistributedDataParallel; I hope someone can offer a solution. :slightly_frowning_face:

I found the problem in my code: it’s caused by the cuDNN batch norm. According to this GitHub issue, the solution is to edit the batch norm part in torch/nn/functional.py or to set torch.backends.cudnn.enabled = False.

Could editing the batch norm part in torch/nn/functional.py work for sync BN?


The batch norm in torch.nn.functional is used just for evaluation, so I think editing it would do nothing for sync batch norm. How do you edit the file to make sync BN work normally?

You are right. Although the performance improves after disabling cudnn, the gap still remains. I can’t figure out the problem and for now I have to use nn.DataParallel. :slightly_frowning_face:

@Mr.Z Did you find the problem? I also get much worse accuracy when using SyncBN + DDP with a batch size of 16 (4 GPUs on one node, 4 images per GPU), but when I use DataParallel + SyncBN, everything is OK.

Same here. Performance of a DDP model is weaker than one trained on a single GPU. Playing with the learning rate and batch size does not help, and as the number of GPUs in DDP training grows, performance degrades further.

Has anyone found the solution ?

UPDATE: the reason was found for my case. When training a DDP model we need to use a DistributedSampler, which is passed to the DataLoader, and we need to call train_dataloader.sampler.set_epoch(epoch) at the start of every epoch.
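
As an illustration of the fix described above (the dataset, batch size, and epoch count are placeholders, and the process group is assumed to be initialized already):

import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# hypothetical dataset; any torch Dataset works
dataset = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))

# DistributedSampler assumes torch.distributed.init_process_group(...) was called
sampler = DistributedSampler(dataset)
train_dataloader = DataLoader(dataset, batch_size=16, sampler=sampler)

for epoch in range(10):
    # without set_epoch, every epoch uses the same shuffling order in every process
    train_dataloader.sampler.set_epoch(epoch)
    for images, labels in train_dataloader:
        pass  # training step goes here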

Having the same issue (DP gives much better validation metrics than DDP). Setting

torch.backends.cudnn.enabled = False

slows my runtime down by 3x.

Monkey patching torch.nn.functional.batch_norm

import torch


def monkey_patch_bn():
    # Replacement for torch.nn.functional.batch_norm that always passes
    # cudnn_enabled=False to torch.batch_norm, forcing the non-cuDNN kernel.
    def batch_norm(input, running_mean, running_var, weight=None, bias=None,
                   training=False, momentum=0.1, eps=1e-5):
        if training:
            # same sanity check as the stock implementation: more than one
            # value per channel is required to compute batch statistics
            size = input.size()
            size_prods = size[0]
            for i in range(len(size) - 2):
                size_prods *= size[i + 2]
            if size_prods == 1:
                raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))

        # the final argument is cudnn_enabled; the stock implementation passes
        # torch.backends.cudnn.enabled here
        return torch.batch_norm(
            input, weight, bias, running_mean, running_var,
            training, momentum, eps, False
        )

    torch.nn.functional.batch_norm = batch_norm

doesn’t seem to help.

train_dataloader.sampler.set_epoch(epoch) doesn’t seem to help either.

EDIT:

What does seem to work is dividing my lr by my world_size, although I’m not sure why.
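
For concreteness, a sketch of what that change amounts to (the model, base learning rate, and optimizer below are placeholder choices, and the process group is assumed to be initialized). Note that a later reply in this thread suggests scaling the learning rate up rather than down, so the direction is something to validate empirically:

import torch
import torch.distributed as dist
import torch.nn as nn

model = nn.Linear(128, 10)  # hypothetical model

base_lr = 0.1  # hypothetical lr tuned for the single-process (DP) run
world_size = dist.get_world_size()  # assumes init_process_group(...) was called

# per-process lr scaled down by the number of DDP processes
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr / world_size)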

@YueshangGu Hi, how do you use DataParallel + SyncBN at the same time? I thought SyncBN only works with DistributedDataParallel.

Another issue spotted in my case: the model has to be moved to the proper device before wrapping it in DDP.
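
A minimal sketch of that ordering (local_rank and the model are placeholders, and the process group is assumed to be initialized):

import torch
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

local_rank = 0  # hypothetical; usually provided by the launcher
model = nn.Linear(128, 10)  # hypothetical model

# move the model to its GPU first, then wrap it in DDP
model = model.to(torch.device("cuda", local_rank))
model = DDP(model, device_ids=[local_rank])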

Our experience with Kinetics-400 using PyTorch 1.3 on a node with two GPUs is as follows:

Single GPU > DP (-0.2%) > DP w/ sync BN (-0.3%)

Single GPU serves as the baseline for DP and DP w/ sync BN.
The tradeoff with distributed training is understandable, but sync BN causing worse accuracy is not easy to ignore.

My setting is the same as yours, except I’m testing on HMDB51. I also get DP > DP w/ sync BN. Have you found a solution to this issue?

Hi all, where can I find the code for SyncBN?

Maybe the learning rate is the problem?

Maybe you are right. When using DDP + SyncBN, BN is computed over a larger effective batch, so the learning rate should be tuned a bit higher (original_lr * num_gpus).

Strangely, using DistributedSampler degrades the performance in my case.

I’m not sure about the effect of DistributedSampler in DistributedDataParallel.

Can you tell me how this monkey patching is done? Which file is this in?

Hi, do you disable cuDNN for the whole project, or just in the batch norm files?