Problems training a model once; the interpreter says that a tensor required for backpropagation was modified, but I don't see where

Hi Ranon!

I am going to speculate as follows:

The first time through, hidden doesn’t depend on the parameters of lstm.
However, the second time through, the (old) hidden depends on the parameters
of lstm as they were before they were updated by optimizer.step().

In general, optimizer.step() modifies inplace the parameters of the model
being optimized.

When you call .backward() the second time, loss, through target, depends
on the old hidden, which depends on the old lstm parameters, so you get
the inplace-modification error.

Would it make sense for your use case (assuming that this is the cause of your
error) to .detach() the old hidden from the computation graph? You would
still pass the old hidden into model so the new pred and hidden would
depend on the old hidden’s values, but those values would be .detach()ed
from their dependency on the old values of lstm’s parameters.
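
Something along these lines (again just a sketch, assuming you want the hidden
state to carry values, but not gradients, across optimizer steps; note that an
LSTM's hidden state is a tuple, so both elements need detaching):

```python
import torch

lstm = torch.nn.LSTM(input_size=4, hidden_size=8)
optimizer = torch.optim.SGD(lstm.parameters(), lr=0.1)

hidden = None
for step in range(2):
    x = torch.randn(5, 1, 4)
    out, hidden = lstm(x, hidden)
    # Detach so the next iteration's backward() does not reach back through
    # the old graph to parameters that optimizer.step() has since modified inplace.
    hidden = tuple(h.detach() for h in hidden)
    loss = out.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```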

Aside from this speculation, please see the suggestions for locating and fixing
inplace-modification errors given in the following post:

Good luck!

K. Frank
