Seq2Seq with DataParallel

Folks, I am trying to run a Seq2Seq training with DataParallel.
For one GPU everything works as expected, but for >1 I encounter an “inplace” operation error in encoder.forward():
RuntimeError: Output 1 of BroadcastBackward is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.

My forward() is, however, rather trivial:

emb = self.embedding(input)   # pretrained embedding; with the real pretrained weights the error points at this call
out, hidden = self.lstm(emb, hidden)
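
For reference, a stripped-down version of the encoder looks roughly like this (the class name, sizes and the from_pretrained loading are simplified placeholders, not my exact code):

import torch.nn as nn

class EncoderLSTM(nn.Module):
    def __init__(self, emb_weights, hidden_size):
        super().__init__()
        # pretrained embedding matrix, loaded once here and not touched in forward()
        self.embedding = nn.Embedding.from_pretrained(emb_weights, freeze=False)
        self.lstm = nn.LSTM(emb_weights.size(1), hidden_size, batch_first=True)

    def forward(self, input, hidden=None):
        emb = self.embedding(input)           # with >1 GPU the traceback points here
        out, hidden = self.lstm(emb, hidden)
        return out, hidden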

My question is whether I am doing this completely wrong, or whether there really is an inplace operation somewhere in my code. I am doing it as follows:

enc = someLSTMencClass()   # this class contains the forward() shown above
dec = someLSTMdecClass()

model = Seq2SeqClass(enc, dec)   # its forward() calls enc and dec
modelDP = nn.DataParallel(model, device_ids=[0,1])

for epoch in range(max_epochs):
  for (data1, data2) in train_rand_loader:
    loss = modelDP(data1, data2)
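
In case a self-contained example helps, here is roughly how everything is wired together (all class names, sizes, the optimizer and the dummy data below are placeholders, not my real setup; the loss is computed inside forward() so DataParallel can scatter both input and target, and the gathered per-GPU losses are averaged before backward()):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

VOCAB, EMB, HID, PAD = 1000, 64, 128, 0

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB, EMB, padding_idx=PAD)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
    def forward(self, src):
        return self.lstm(self.embedding(src))   # -> (outputs, (h, c))

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB, EMB, padding_idx=PAD)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)
    def forward(self, tgt, hidden):
        out, hidden = self.lstm(self.embedding(tgt), hidden)
        return self.out(out), hidden

class Seq2Seq(nn.Module):
    def __init__(self, enc, dec):
        super().__init__()
        self.enc, self.dec = enc, dec
        self.criterion = nn.CrossEntropyLoss(ignore_index=PAD)
    def forward(self, src, tgt):
        _, hidden = self.enc(src)                     # encode the source batch
        logits, _ = self.dec(tgt[:, :-1], hidden)     # teacher forcing
        return self.criterion(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))

model = Seq2Seq(Encoder(), Decoder()).cuda()
modelDP = nn.DataParallel(model, device_ids=[0, 1])
optimizer = torch.optim.Adam(modelDP.parameters())

src = torch.randint(1, VOCAB, (32, 20)).cuda()        # dummy source batch
tgt = torch.randint(1, VOCAB, (32, 22)).cuda()        # dummy target batch
loader = DataLoader(TensorDataset(src, tgt), batch_size=8)

for data1, data2 in loader:
    optimizer.zero_grad()
    loss = modelDP(data1, data2).mean()               # DataParallel returns one loss per GPU
    loss.backward()
    optimizer.step()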

Any hint is welcome. Thanks.

Dodo

Try a smaller batch_size.