Using torch.nn.DataParallel with torch.distributions.Laplace throws TypeError: 'Laplace' object is not iterable

I am trying to train my model on multiple GPUs, but I am having trouble with torch.distributions.Laplace, which I call in the forward pass.
I have uploaded a minimal working example that runs fine without torch.nn.DataParallel but fails when using it.

Is there any way to make this code run on multiple GPUs?
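Since the uploaded example isn't shown here, below is a hypothetical sketch of the pattern that triggers this error. The module name `LaplaceNet` and its layer sizes are invented for illustration; the key point is that `forward` returns a `torch.distributions.Laplace` object. `nn.DataParallel` replicates the module across GPUs and then gathers the per-replica outputs, and that gather step only knows how to collect tensors (and nested containers of tensors), so it trips over a distribution object.

```python
import torch
import torch.nn as nn

class LaplaceNet(nn.Module):
    """Toy module that builds a Laplace distribution in forward()."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Linear(4, 4)
        self.log_scale = nn.Linear(4, 4)

    def forward(self, x):
        # Returning the distribution object itself is what breaks under
        # nn.DataParallel: its gather step cannot collect non-tensor
        # outputs from the replicas.
        return torch.distributions.Laplace(
            self.loc(x), self.log_scale(x).exp()
        )

model = LaplaceNet()
x = torch.randn(8, 4)
dist = model(x)          # fine on CPU or a single GPU
sample = dist.rsample()  # shape (8, 4)

# With >= 2 GPUs, the equivalent of
#   nn.DataParallel(model.cuda())(x.cuda())
# fails at the gather step with the TypeError from the title.
```

A common workaround (independent of which parallel wrapper you use) is to return the `loc` and `scale` tensors from `forward` and construct the distribution outside the wrapped module.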

I wasn’t able to reproduce the error. Which PyTorch version are you on? I tested on 1.6.

Hmm, interesting. I am on 1.6.0 as well. I checked again and found that at least two GPUs need to be available to reproduce the error.

That was dumb of me. I forgot that on a single GPU the code behaves identically.

Is there any specific reason for using DataParallel instead of DistributedDataParallel? I have only worked with single-GPU machines, so I don’t know the details here.


No particular reason, I have just seen more examples using DataParallel :slight_smile:
But it could be worth trying out whether things look different with DistributedDataParallel.


DistributedDataParallel seems to work without problems, thanks for the hint :slight_smile:
I have uploaded a gist showing how the code now runs with world_size=2.
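The gist itself isn't reproduced here, so the following is only a sketch of why DistributedDataParallel avoids the problem, using an invented toy module. Unlike DataParallel, DDP runs one process per device and never gathers forward outputs across replicas, so each process can freely return a distribution object. The sketch uses the gloo backend so it also runs on a CPU-only machine; the address, port, and module definition are assumptions, not the original code.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

class LaplaceNet(nn.Module):
    """Toy module returning a Laplace distribution from forward()."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Linear(4, 4)
        self.log_scale = nn.Linear(4, 4)

    def forward(self, x):
        return torch.distributions.Laplace(
            self.loc(x), self.log_scale(x).exp()
        )

def worker(rank, world_size):
    # Hypothetical rendezvous settings for a single-machine run.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # gloo backend so the sketch also works without CUDA;
    # use "nccl" and move the model to cuda:rank on GPU machines.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    model = DDP(LaplaceNet())
    out = model(torch.randn(8, 4))
    # Each process keeps its own output; nothing is gathered, so a
    # distribution object is a valid return value here.
    assert isinstance(out, torch.distributions.Laplace)
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2, join=True)
```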
