Multinode DDP runs out of CUDA memory but not single node

When training on multiple nodes with DistributedDataParallel, my run reliably goes out of memory by the 30th iteration, yet the same batch size on fewer GPUs trains fine. I expect all GPUs to be doing the same thing, but that doesn't seem to be the case: what aspects of PyTorch could cause different behavior on different ranks? All processes run on the same type of GPU, and I am using PyTorch 1.0rc1.
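For context, a minimal sketch of the kind of per-rank memory logging I could add inside the training loop to compare ranks (the helper name `log_rank_memory` is just illustrative):

```python
import torch
import torch.distributed as dist

def log_rank_memory(step):
    # Illustrative helper: print CUDA memory stats for this rank so that
    # divergent memory usage across ranks shows up in the logs.
    rank = dist.get_rank()
    alloc_mib = torch.cuda.memory_allocated() / 1024 ** 2
    peak_mib = torch.cuda.max_memory_allocated() / 1024 ** 2
    print(f"rank {rank} step {step}: "
          f"allocated {alloc_mib:.0f} MiB, peak {peak_mib:.0f} MiB")
```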
