DistributedDataParallel does not work with a custom function in the model

I'm trying to use this model


But I am getting this error:

File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/ec2-user/SageMaker/tacotron2/model/model.py", line 480, in forward
output_lengths)
File "/home/ec2-user/SageMaker/tacotron2/model/model.py", line 452, in parse_output
outputs[0].data.masked_fill_(mask, 0.0)
RuntimeError: The expanded size of the tensor (1079) must match the existing size (836) at non-singleton dimension 2. Target sizes: [4, 80, 1079]. Tensor sizes: [4, 80, 836]
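For context, this kind of size mismatch typically appears when the batch is scattered across replicas: the collate step pads every tensor to the global max length, but each replica rebuilds its mask from the max length of only its own slice of the batch. A minimal sketch of that drift, with purely illustrative lengths (no GPUs or PyTorch required):

```python
# Illustrative sketch of why the mask size can drift from the padded
# tensor size when a batch is scattered across replicas.
# All lengths below are made up for illustration.

def pad_len(lengths):
    # The padded time dimension is the max length in whatever batch we see.
    return max(lengths)

# Full batch: collate pads every utterance to the global max (1079 frames).
full_batch_lengths = [1079, 836, 512, 300, 950, 700, 420, 610]
padded_T = pad_len(full_batch_lengths)      # 1079

# A parallel wrapper scatters the batch; one replica gets only this slice.
replica_lengths = [836, 512, 300, 420]
mask_T = pad_len(replica_lengths)           # 836 <- local max, not 1079

# The replica's tensors are still padded to 1079 frames, but its mask is
# built to 836, so masked_fill_ cannot broadcast one onto the other.
print(padded_T, mask_T, padded_T == mask_T)  # -> 1079 836 False
```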

How can I solve it?

Could you provide your DDP code to reproduce the issue? Also, does the model work properly without DDP?

https://colab.research.google.com/drive/104LtQ1zIioIOMQEPgVve77m5Rd4Gm0wU
Yes, it works fine; the same code also works fine on a single-GPU Colab instance.
Tested on an 8× V100 instance from Amazon.
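One common workaround for this class of error is to build the mask against an explicitly passed maximum length (the globally padded time dimension) rather than recomputing it from the lengths visible to each replica. A hedged sketch below; the helper name and signature mirror Tacotron2-style masking but are assumptions, not the exact upstream code:

```python
import torch

def get_mask_from_lengths(lengths, max_len=None):
    # Hypothetical helper in the spirit of Tacotron2's masking. The key
    # change is accepting an explicit max_len instead of always using
    # lengths.max(), so every replica builds a mask that matches the
    # globally padded time dimension.
    if max_len is None:
        max_len = int(lengths.max().item())
    ids = torch.arange(max_len, device=lengths.device)
    return ids[None, :] >= lengths[:, None]   # True where padding

# Suppose the padded outputs are [batch, n_mel, T_global]; pass T_global in.
lengths = torch.tensor([836, 512, 300, 420])  # one replica's slice
T_global = 1079                               # global padded length
mask = get_mask_from_lengths(lengths, max_len=T_global)

outputs = torch.randn(4, 80, T_global)
outputs.masked_fill_(mask.unsqueeze(1), 0.0)  # sizes now line up
```

The idea is that `parse_output` would receive (or look up) `T_global` from the padded batch instead of deriving it per replica.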