In-place operation error after moving to different torch/cuda versions


I recently moved from a machine with Python 3.6, torch 0.4.1 and CUDA 9.0 to a machine where I installed Python 3.6 and torch 0.4.0 in a virtualenv; the CUDA version on that machine is 8.0.

My code was working perfectly on the first machine, but on the second machine it raises an “inplace operation error”.
I have inspected the code to find where the in-place operation happens, but I can’t locate it; the problem is not even in the set of statements where I expected it to be.
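For reference, this is the kind of failure I mean: an in-place modification of a tensor that autograd saved for the backward pass. A minimal standalone example (unrelated to my model, just to show the error class):

```python
import torch

# sigmoid saves its output for the backward pass; modifying that
# output in place invalidates the saved tensor and makes backward fail
a = torch.ones(3, requires_grad=True)
b = a.sigmoid()
b.mul_(2)  # in-place multiply on a tensor autograd still needs

try:
    b.sum().backward()
except RuntimeError as e:
    print(e)  # "... has been modified by an inplace operation"
```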

My code looks like:

char_rep_lst = []
for i in range(sequence_length):
     self.char_hidden = autograd.Variable(torch.zeros(2, self.batch_size, self.char_hidden_dim).type(self.dtype))

     char_embeds = self.embed_dropout( self.char_embeddings( char_sequence[i,:,:] ) )
     char_rep_seq, self.char_hidden = self.charRNN(char_embeds, self.char_hidden)
     char_rep_lst.append( self.char_mlp( char_rep_seq.sum(dim=0) ) )

char_rep = torch.stack( char_rep_lst )
word_embeds = self.embed_dropout( self.word_embeddings(sentence)  )
lexical_input = torch.cat( [word_embeds, self.hidden_dropout( char_rep )], 2 )
lex_rep, self.lex_hidden = self.lexRNN(lexical_input, self.lex_hidden)
lex_rnn_out = torch.cat( [lex_rep, char_rep], 2 )
scores = F.log_softmax(self.hidden2tag(lex_rnn_out), dim = 2)

return [scores, scores, scores]

In the last statement I return 3 scores because this is actually a simplification of the whole code, which I made to try to pinpoint the in-place operation. In the full code I return 3 different scores and perform 2 optimization steps with 2 different optimizers, keeping the graphs alive with “retain_graph = True”. It is indeed the first “backward(retain_graph = True)” call that triggers the “inplace operation error”.
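For concreteness, here is a minimal, hypothetical sketch of that training pattern (the encoder, heads, losses and optimizers are placeholders, not my actual code): one forward pass builds a shared graph, the first backward keeps it alive with retain_graph=True, and all in-place parameter updates happen only after both backward passes.

```python
import torch

# Hypothetical stand-ins for the real network: a shared encoder and
# two heads, each trained by its own optimizer (names are placeholders)
encoder = torch.nn.Linear(4, 4)
head_a = torch.nn.Linear(4, 2)
head_b = torch.nn.Linear(4, 2)
opt_a = torch.optim.SGD(list(encoder.parameters()) + list(head_a.parameters()), lr=0.1)
opt_b = torch.optim.SGD(head_b.parameters(), lr=0.1)

x = torch.randn(3, 4)
h = encoder(x)                      # single forward pass, shared graph
loss_a = head_a(h).sum()
loss_b = head_b(h).pow(2).sum()

opt_a.zero_grad()
opt_b.zero_grad()
loss_a.backward(retain_graph=True)  # keep the shared graph alive
loss_b.backward()                   # second pass through the same graph
opt_a.step()                        # in-place parameter updates happen
opt_b.step()                        # only after all backward passes
```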

The self.char_mlp is an MLP class, defined as:

Does anyone understand where the in-place operation is?
Is there a way to see which particular line performs the in-place operation? I mean, instead of only seeing the trace ending at the backward call…

Thank you in advance for any help.