How to fix an in-place operation in a recurrent cycle in PyTorch

So, when I call

for data, label in train_loader:
    a, cutoff = model(torch.t(data), R, 1)
    loss = (cutoff - label) ** 2
    loss.backward()

I get this error:

RuntimeError: one of the variables needed for gradient computation 
has been modified by an inplace operation: [torch.FloatTensor [1, 1]], 
which is output 0 of SelectBackward, is at version 51; 
expected version 50 instead. Hint: the backtrace further above shows 
the operation that failed to compute its gradient. The variable 
in question was changed in there or anywhere later. Good luck!   

Here is my Net's forward function:

def forward(self, x, R, L):
    a = torch.zeros(R, L, 1)
    for cycle in range(R):
        if cycle == 0:
            a[0] = self.w * torch.mm(self.wih, x)

        a[cycle] = relu(torch.mm(self.whh, a[cycle - 1]) + (self.w * torch.mm(self.wih, x)))
    cutoff = sqr(torch.norm(torch.mm(torch.t(self.wih), a[R - 1]))) / sqr(torch.norm(x))
    return a, cutoff

And the problem is in

a[cycle] = .....a[cycle-1]....

where the index changes on every iteration of the cycle.
How can I fix this in-place operation, or rewrite the cycle to avoid it?

Here is the traceback:

Traceback (most recent call last):

  File "C:\Users\A877\untitled2.py", line 69, in <module>
    loss.backward()

  File "C:\Users\A877\.conda\envs\test\lib\site-packages\torch\tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)

  File "C:\Users\A877\.conda\envs\test\lib\site-packages\torch\autograd\__init__.py", line 97, in backward
    Variable._execution_engine.run_backward(

You can change a to a Python list, append the Lx1 tensors to it inside the loop, and call torch.stack on it after the loop. Also, if self.w is a scalar, your loop can be replaced with nn.RNNCell applied to the rescaled x (if it is a trainable vector, it is too tied to W_ih to be useful, I think).
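A rough sketch of the list-and-stack version is below. It assumes relu is torch.nn.functional.relu, that sqr just squares its argument (so it is written as ** 2 here), and that the cycle == 0 branch was meant to set the first state to w * W_ih x before the recurrence starts:

import torch
import torch.nn.functional as F

def forward(self, x, R, L):
    # Keep the hidden states in a Python list; appending to a list never
    # modifies an existing tensor in place, so autograd keeps every version.
    states = []
    drive = self.w * torch.mm(self.wih, x)           # input term, reused at every step
    h = drive                                        # assumed initial state (the cycle == 0 branch)
    states.append(h)
    for _ in range(R - 1):
        h = F.relu(torch.mm(self.whh, h) + drive)    # recurrence on the previous state
        states.append(h)
    a = torch.stack(states)                          # shape (R, L, 1), same as the preallocated tensor
    # squared-norm ratio, written with ** 2 instead of the sqr helper
    cutoff = torch.norm(torch.mm(torch.t(self.wih), a[-1])) ** 2 / torch.norm(x) ** 2
    return a, cutoff

Because every h is a fresh tensor, loss.backward() no longer sees a tensor that was overwritten after being used in the graph, which is exactly what the version counter in the error message complains about.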