How to fix an in-place operation in a recurrent loop in PyTorch

So, when I run

for data, label in train_loader:
    a, cutoff = model(torch.t(data), R, 1)
    loss = (cutoff - label)**2

I get this error:

RuntimeError: one of the variables needed for gradient computation 
has been modified by an inplace operation: [torch.FloatTensor [1, 1]], 
which is output 0 of SelectBackward, is at version 51; 
expected version 50 instead. Hint: the backtrace further above shows 
the operation that failed to compute its gradient. The variable 
in question was changed in there or anywhere later. Good luck!   

Here is my Net's forward function:

def forward(self, x, R, L):
    a = torch.zeros(R, L, 1)
    for cycle in range(R):
        if cycle == 0:
            ...
        a[cycle] = relu(a[cycle - 1] + self.w * x)
    cutoff = torch.norm(a[R - 1])**2 / torch.norm(x)**2
    return a, cutoff

The problem is in

a[cycle] = .....a[cycle-1]....

and cycle changes on every iteration.
How can I fix this in-place operation, or restructure the loop?

Here is the traceback:

Traceback (most recent call last):

  File "C:\Users\A877\", line 69, in <module>

  File "C:\Users\A877\.conda\envs\test\lib\site-packages\torch\", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)

  File "C:\Users\A877\.conda\envs\test\lib\site-packages\torch\autograd\", line 97, in backward

You can change `a` to a list, append Lx1 tensors to it, and use `torch.stack` after the loop. Also, if `self.w` is a scalar, your loop can be replaced with `nn.RNNCell` applied to rescaled `x` (if it is a trainable vector, it is too tied to `W_ih` to be useful, I think).
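A minimal sketch of the first suggestion, assuming `self.w` is a scalar parameter and a zero initial state (both are assumptions, since the question elides the `cycle == 0` branch). The key point is that appending to a Python list and calling `torch.stack` afterwards builds the same `(R, L, 1)` tensor without ever writing into an existing tensor in place:

```python
import torch
from torch.nn.functional import relu

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Assumed: a single scalar weight, as hinted at in the question.
        self.w = torch.nn.Parameter(torch.tensor(0.5))

    def forward(self, x, R, L):
        # Accumulate states in a Python list instead of assigning into a
        # preallocated tensor: `a[cycle] = ...` is the in-place write that
        # invalidates saved tensors and breaks autograd.
        states = []
        state = torch.zeros(L, 1)          # assumed zero initial state
        for _ in range(R):
            state = relu(state + self.w * x)
            states.append(state)
        a = torch.stack(states)            # shape (R, L, 1), same as before
        cutoff = torch.norm(a[-1])**2 / torch.norm(x)**2
        return a, cutoff
```

With this version, `backward()` runs without the "modified by an inplace operation" error:

```python
model = Net()
x = torch.randn(3, 1)
a, cutoff = model(x, 5, 3)
loss = (cutoff - 1.0)**2
loss.backward()   # no RuntimeError
```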