Keep intermediate state variable through loss.backward()

I am working on a time-series control simulation with an LSTM controller, which uses the sensed response signal to generate the driving signal. The goal is to minimize the response.

When I apply loss.backward(), the intermediate state variable (an intermediate value of the graph) is automatically freed.

The next call to backward() then fails with:
“Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.”

However, this intermediate state variable records the final state of the previous step, which is critical for the simulation.
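For reference, here is a minimal sketch of the kind of closed-loop training step I am running. The controller, head, and plant below are placeholders (plant stands in for my real simulation, and the sizes are not my actual ones), but the structure reproduces the same error on the second iteration:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder controller: an LSTM maps the sensed response to a driving signal.
controller = nn.LSTM(input_size=6, hidden_size=32, batch_first=True).to(device)
head = nn.Linear(32, 6).to(device)
optimizer = torch.optim.Adam(list(controller.parameters()) + list(head.parameters()))

def plant(drive):
    # Stand-in for the real simulation: returns the sensed response to the drive.
    return 0.9 * drive + 0.01 * torch.randn_like(drive)

response = torch.zeros(32, 6, device=device)
hidden = None  # (h, c) carried over from the previous simulation step

for step in range(100):
    out, hidden = controller(response.unsqueeze(1), hidden)
    drive = head(out.squeeze(1))      # driving signal for this step
    response = plant(drive)           # sensed response to that drive
    loss = response.pow(2).mean()     # objective: minimize the response

    optimizer.zero_grad()
    loss.backward()                   # frees the saved tensors of this graph
    optimizer.step()
    # On the next iteration, `hidden` (and `response`) still point into the
    # freed graph, so backward() raises the error quoted above.
```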

I have also tried loss.backward(retain_graph=True), but then I get:
“RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 6]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!”
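For completeness, the only change relative to the sketch above was the backward call:

```python
    optimizer.zero_grad()
    loss.backward(retain_graph=True)  # keep the graph so `hidden` stays usable
    optimizer.step()                  # but this updates the weights in place, which I
                                      # suspect is what the in-place error refers to
```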

How can I keep the intermediate state variable through loss.backward()? Any suggestions?

I learned a lot from the post below, but I still cannot solve this.

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time - PyTorch Forums

Best regards.
Chen