How to store all hidden representations for each word in a sentence using an LSTM?

I am facing a problem storing the hidden variables (h and c) while looping through the words in a sentence. I want to store the hidden state representation of every word and use it for later computation. For example, if a sentence has 10 words, I want to store the hidden states of all 10 words in separate Variables.

I am using a simple LSTM-based encoder, as follows.

encoder_output, encoder_hidden = self.encoder(embedded, encoder_hidden)
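To make the setup concrete, here is a simplified, self-contained version of what I am trying to do (the vocabulary size, dimensions, and sentence length below are placeholders, not my real values): feed the encoder one word at a time and append every (h, c) pair to Python lists. With the old Variable API the tensors would be wrapped in Variable, but the loop is the same.

import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=1000, embedding_dim=32)
encoder = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

sentence = torch.randint(0, 1000, (1, 10))  # batch of 1, 10 word indices
h = torch.zeros(1, 1, 64)                   # (num_layers, batch, hidden_size)
c = torch.zeros(1, 1, 64)

hidden_states, cell_states = [], []
for t in range(sentence.size(1)):
    embedded = embedding(sentence[:, t:t + 1])  # keep the sequence dimension
    output, (h, c) = encoder(embedded, (h, c))
    hidden_states.append(h)  # appending to a Python list is not an in-place tensor op
    cell_states.append(c)

After the loop, hidden_states[t] holds the encoder state after word t, ready for later computation.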

Whenever I run my code, the forward pass works fine, but loss.backward() raises the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I have spent a significant amount of time tracking down the problem and found the cause, but I am unable to fix it. I tried keeping the states in lists of Variables, which should work, but I still get the same error.

embedded = self.embedding(input_variable)
# feed the states stored at step idx back into the encoder
encoder_output, encoder_hidden = self.encoder(embedded, (hidden_states[idx], cell_states[idx]))
# keep the new (h, c) pair for later computation
hidden_states.append(encoder_hidden[0])
cell_states.append(encoder_hidden[1])

I even tried cloning the Variables, but the problem was not resolved.
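The cloning attempt looked roughly like this (reconstructed from memory, so the exact lines may have differed):

hidden_states.append(encoder_hidden[0].clone())  # append copies instead of the originals
cell_states.append(encoder_hidden[1].clone())

clone() gives each stored state its own memory, so I expected it to break whatever aliasing was causing the error, but backward() still failed.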

Any help would be much appreciated. Thanks.

Update: I have solved the problem. The issue was actually with input_variable, not with the hidden representations.
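For anyone who hits the same error, here is a minimal reproduction of what I believe was happening (the offending line in my real code was different, but the mechanism is the same): nn.Embedding saves its index tensor for the backward pass, so writing into input_variable in place after the forward pass bumps that tensor's version counter and autograd refuses to run backward.

import torch
import torch.nn as nn

embedding = nn.Embedding(1000, 32)
input_variable = torch.randint(0, 1000, (1, 10))

embedded = embedding(input_variable)  # the index tensor is saved for backward
input_variable[0, 0] = 0              # in-place write after the forward pass

embedded.sum().backward()  # RuntimeError: ... modified by an inplace operation

The fix was to make the modification out of place, e.g. mutate a clone of input_variable (or build a new tensor) so the saved one stays untouched.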