How to reset an LSTM hidden state to a previous state?

I’d like to keep track of an LSTM hidden state and at some point revert to an earlier state, but I’m running into errors when I try to reuse the saved state. I’m brand new to PyTorch, and any help would be greatly appreciated!

There are two things I’ve tried so far. The first is to save the output Variables and reuse them later. This leads to an error in backward(), since the second backward pass traverses a graph that has already been freed. That makes sense to me, and retaining the graph doesn’t seem to be what I want.

The second is to get the Tensors out of the output Variables and create new Variables from those saved Tensors. This seems to break autograd: backward() raises a ‘there are no graph nodes that require computing gradients’ RuntimeError, and I don’t understand why.
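For reference, here is a minimal sketch of that second approach (the sizes and names are made up for illustration): save the hidden state’s underlying tensors, then later rewrap them in fresh Variables. The rewrapped state carries no graph history, which is what lets you revert without backward() revisiting the old graph:

```python
import torch
from torch import nn
from torch.autograd import Variable

lstm = nn.LSTM(input_size=8, hidden_size=16)

x = Variable(torch.randn(5, 3, 8))          # (seq_len, batch, input_size)
hidden = (Variable(torch.zeros(1, 3, 16)),  # h_0: (num_layers, batch, hidden_size)
          Variable(torch.zeros(1, 3, 16)))  # c_0

out, hidden = lstm(x, hidden)

# Save the raw tensors of the state we might want to revert to
checkpoint = (hidden[0].data.clone(), hidden[1].data.clone())

# ... run more steps, then revert: wrap the saved tensors in fresh Variables.
# The new Variables carry no graph history, so a later backward() won't try
# to traverse the (already-freed) old graph.
hidden = (Variable(checkpoint[0]), Variable(checkpoint[1]))
out, hidden = lstm(x, hidden)
```

This is essentially the “repackaging” pattern used for truncated backpropagation through time, so by itself it shouldn’t cause the error above.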

Thanks in advance!

I’m not sure what is going wrong, but you should only get the second error if there really are no graph nodes requiring a gradient. That shouldn’t be the case here: if you feed the hidden states into an LSTM and then backpropagate on its output, the LSTM’s parameters will require gradients.
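To illustrate the point, a small self-contained sketch (the dummy loss is just `out.sum()`): even though the hidden states below have no history and don’t require gradients themselves, backward() still succeeds because the LSTM’s weights do:

```python
import torch
from torch import nn
from torch.autograd import Variable

lstm = nn.LSTM(input_size=8, hidden_size=16)
x = Variable(torch.randn(5, 3, 8))

# Hidden states with no graph history and requires_grad=False
hidden = (Variable(torch.zeros(1, 3, 16)), Variable(torch.zeros(1, 3, 16)))

out, _ = lstm(x, hidden)
loss = out.sum()    # stand-in for a real loss
loss.backward()     # succeeds: the LSTM's weight/bias parameters require gradients
print(lstm.weight_ih_l0.grad is not None)  # True
```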

Thanks for the response. I had a bug in my code where I was passing a Tensor instead of a Variable to the loss function. No wonder there were no graph nodes requiring gradients!
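For anyone hitting the same error, here is one plausible reconstruction of that kind of bug as a minimal sketch (all names, shapes, and the MSE loss are illustrative): wrapping the output’s raw tensor in a fresh Variable before the loss cuts the graph, so nothing in the loss’s history requires gradients:

```python
import torch
from torch import nn
from torch.autograd import Variable

criterion = nn.MSELoss()
output = Variable(torch.randn(3, 4), requires_grad=True)  # stand-in for the model's output
target = torch.randn(3, 4)                                # a raw Tensor

# Buggy: rewrapping output.data discards the graph history
bad_loss = criterion(Variable(output.data), Variable(target))
try:
    bad_loss.backward()
except RuntimeError as e:
    print(e)  # 'there are no graph nodes that require computing gradients'

# Fixed: pass the Variable the model actually produced
good_loss = criterion(output, Variable(target))
good_loss.backward()  # ok
```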