"Leaf variable has been moved into the graph interior" error in custom RNN implementation

I am implementing a custom RNN. It may sound odd, but it should have a separate hidden state for each input. I will use it inside a graph convolution, and as input it takes edge weights one by one. I am sharing a restricted case with only 3 inputs as an illustration.

import torch
from torch import nn
from torch.nn import Parameter, Tanh

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()
        self.weight = Parameter(torch.rand(3, 1, requires_grad=True))
        self.weight_h = Parameter(torch.rand(3, 1, requires_grad=True))
        self.hidden = Parameter(torch.rand(3, 1, requires_grad=True))
        self.bias = Parameter(torch.rand(3, 1, requires_grad=True))
        self.tanh = Tanh()
        self.iteration = 0

    def forward(self, x):
        # update the hidden state for the current input in place
        self.hidden[self.iteration] = self.tanh(x * self.weight[self.iteration]
                                                + self.hidden[self.iteration].clone() * self.weight_h[self.iteration]
                                                + self.bias[self.iteration])
        self.iteration = self.iteration + 1
        return self.hidden[self.iteration - 1]

model = RNN()
mael = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
x = torch.rand(3,1)
for i in range(2):
    optimizer.zero_grad()
    out = model(x[i])
    loss = mael(out, x[i+1])
    loss.backward()
    optimizer.step()
    print(loss.item())

However, the code gives the error below. The hidden state is probably being changed by an in-place operation, but I could not find another way to update it.

RuntimeError                              Traceback (most recent call last)
<ipython-input-11-24c2beaf081c> in <module>()
      7     out = model(x[i])
      8     loss = mael(out, x[i+1])
----> 9     loss.backward()
     10     optimizer.step()
     11     print(loss.item())
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
    130     Variable._execution_engine.run_backward(
    131         tensors, grad_tensors_, retain_graph, create_graph,
--> 132         allow_unreachable=True)  # allow_unreachable flag
    133 
    134 

RuntimeError: leaf variable has been moved into the graph interior

Thanks in advance.

Hi,

Yes, this line: self.hidden[self.iteration] = XXX is modifying the hidden state in place, which leads to the error you see.
Do you actually want to learn the hidden state here? Because it seems that you overwrite the value that you were supposed to learn.

Actually, what I want is to create a hidden state for each edge in the graph, so that at each time step the corresponding hidden state is updated.

Isn't this how an RNN works? We should calculate the current hidden state using the previous one.

Yes, but here you are modifying the original Parameter when you compute the new one. Maybe you just want the hidden state to be a temporary Tensor in your forward function, so that it does not modify the original Parameter?
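
For what it's worth, here is a minimal sketch of one way to read that suggestion (not necessarily the only fix): keep the learnable weights as Parameters, register the per-edge hidden states as a plain buffer via register_buffer, compute the new hidden value out of place, and only write a detached copy back into the buffer. The name new_hidden is just illustrative.

import torch
from torch import nn
from torch.nn import Parameter, Tanh

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()
        self.weight = Parameter(torch.rand(3, 1))
        self.weight_h = Parameter(torch.rand(3, 1))
        self.bias = Parameter(torch.rand(3, 1))
        # state that is updated rather than learned, so a buffer instead of a Parameter
        self.register_buffer("hidden", torch.zeros(3, 1))
        self.tanh = Tanh()
        self.iteration = 0

    def forward(self, x):
        i = self.iteration
        # build the new hidden value out of place, so gradients flow into the weights
        new_hidden = self.tanh(x * self.weight[i]
                               + self.hidden[i] * self.weight_h[i]
                               + self.bias[i])
        # store a detached copy; the buffer never becomes part of the autograd graph
        self.hidden[i] = new_hidden.detach()
        self.iteration = i + 1
        return new_hidden

With this, the training loop in the question runs as written and loss.backward() updates weight, weight_h and bias; the stored hidden state simply acts as a constant input at the next step.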