RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [19, 140, 32]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace furt

Hi Herbert!

I don’t see the cause of your inplace-modification error, but here are some
things to look at:

Can you locate a tensor of shape [19, 140, 32], perhaps the output of
the self.relu() in line 462?

Try checking its ._version property right after the relu() and then again
right before you call loss.backward(). If ._version changes, the tensor is
being modified inplace; if it changes from 0 to 1, you have most likely
found your culprit, as that matches the version mismatch that autograd is
reporting.
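As a self-contained sketch of this check (using a free-standing relu and a stand-in tensor of your reported shape, rather than your actual model):

```python
import torch
import torch.nn as nn

relu = nn.ReLU()
x = torch.randn(19, 140, 32, requires_grad=True)

out = relu(x)
print(out._version)   # 0 right after the relu

out += 1.0            # an inplace op (e.g. +=) bumps the version counter
print(out._version)   # now 1 -- this tensor was modified inplace

loss = out.sum()
# loss.backward() would now raise the version-mismatch RuntimeError,
# because ReluBackward0 needs the (now modified) relu output.
```

Note that ._version is an internal attribute, so it's fine for debugging but not something to rely on in production code.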

For some discussion about what can cause inplace-modification errors
and how to find and fix them, see this post:
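In the meantime, a common fix (assuming the culprit is an explicit inplace op or an nn.ReLU(inplace = True)) is to replace the inplace operation with its out-of-place version, so the tensor autograd saved is left untouched:

```python
import torch
import torch.nn as nn

relu = nn.ReLU()      # nn.ReLU(inplace = True) is a frequent culprit
x = torch.randn(19, 140, 32, requires_grad=True)

out = relu(x)
out = out + 1.0       # out-of-place: creates a new tensor, so the
                      # relu output saved for backward stays at version 0

loss = out.sum()
loss.backward()       # succeeds -- no saved tensor was modified inplace
print(x.grad.shape)
```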

Best.

K. Frank