PyTorch loss.backward() error

Hi there!

When I call loss.backward(), I get the following error:

one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1024, 112]], which is output 0 of struct torch::autograd::CopySlices, is at version 112; expected version 111 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

How could I fix this problem?
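Without seeing the actual code it's hard to say exactly, but the `CopySlices` in the message suggests a slice assignment whose result was later modified in place before `backward()` was called. Here is a minimal sketch (names like `buf` and `w` are made up for illustration) that reproduces the error, plus one common fix: clone the tensor before the op that saves it for its gradient, so later in-place edits no longer invalidate the graph.

```python
import torch

# BROKEN: slice assignment creates a CopySlices node; pow saves `buf`
# for its backward, then the in-place edit bumps buf's version counter.
w = torch.randn(3, requires_grad=True)
buf = torch.zeros(3)
buf[:] = w * 2            # CopySlices
loss = (buf ** 2).sum()   # pow needs `buf` to compute its gradient
buf[0] = 5.0              # in-place modification after the save
failed = False
try:
    loss.backward()       # raises the "modified by an inplace operation" error
except RuntimeError:
    failed = True
print("broken version raised:", failed)

# FIX: clone before the op that saves the tensor, so the saved copy
# is untouched by later in-place edits to `buf2`.
w2 = torch.randn(3, requires_grad=True)
buf2 = torch.zeros(3)
buf2[:] = w2 * 2
loss2 = (buf2.clone() ** 2).sum()
buf2[0] = 5.0             # harmless now
loss2.backward()
print("fixed version grad is set:", w2.grad is not None)
```

Alternatively, rewrite the in-place update as an out-of-place one (e.g. `buf = buf * mask + update` instead of `buf[idx] = update`) if cloning is too costly.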


What does the extra backtrace show as the issue?
It should point to the forward op whose result was modified in place even though it was needed to compute some gradients.
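That extra traceback only appears when anomaly detection is enabled. A minimal sketch of turning it on (the tensors here are illustrative, not from the original post) so the failing backward also prints the forward stack trace of the offending op:

```python
import torch

# Anomaly mode records the forward stack trace of each op, so a failing
# backward additionally prints where the problematic op was created.
torch.autograd.set_detect_anomaly(True)

w = torch.randn(4, requires_grad=True)
buf = torch.zeros(4)
buf[:] = w * 2            # CopySlices
loss = (buf ** 2).sum()   # pow saves `buf` for backward
buf += 1.0                # offending in-place edit

caught = False
try:
    loss.backward()       # anomaly mode prints the forward traceback first
except RuntimeError:
    caught = True
print("caught:", caught)

torch.autograd.set_detect_anomaly(False)  # it slows training, disable after debugging
```

Note that anomaly detection adds significant overhead, so it is meant for debugging only, not for regular training runs.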