Gradient computation error caused by an in-place operation

I have an encoder-decoder model and a function that computes something iteratively:

def f(a, b, c):
    # a: [batchsize, n, channel]
    res_a = torch.empty(a.shape)
    res_b = torch.empty(b.shape)
    for i in range(a.shape[1]):
        # per-step computation elided; it indexes a, b, c and does adds/multiplies
        res_a[:, i] = ...  # in-place slice assignment into res_a
        res_b[:, i] = ...  # in-place slice assignment into res_b
    return torch.cat([res_a, res_b], -1)
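
As a reference point, here is an out-of-place sketch of f that collects the per-step results in a list and stacks them at the end, so there are no slice assignments at all (the per-step computation is still a placeholder):

def f_stacked(a, b, c):
    # a: [batchsize, n, channel]
    cols_a, cols_b = [], []
    for i in range(a.shape[1]):
        cols_a.append(...)  # same per-step computation as above, shape [batchsize, channel]
        cols_b.append(...)
    res_a = torch.stack(cols_a, dim=1)  # out-of-place build; no in-place writes
    res_b = torch.stack(cols_b, dim=1)
    return torch.cat([res_a, res_b], -1)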

This is the main procedure of the forward function:

def forward(x):
    a, b, c = x[0:i], x[i:j], x[j:k]
    q = f(a, b, c)
    x_hat = decoder(encoder(x))
    a, b, c = x_hat[0:i], x_hat[i:j], x_hat[j:k]
    q_hat = f(a, b, c)
    return q, q_hat

The loss is the difference between q and q_hat.
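
Concretely, something like this, where the choice of MSE is just illustrative:

loss = torch.nn.functional.mse_loss(q, q_hat)
loss.backward()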

Then an error occurs during the backward pass, in the part of the graph that computes q_hat:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [64, 3]], which is output 0 of SelectBackward, is at version 21; expected version 20 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
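
Following the hint, anomaly detection can be enabled so the backward error includes a traceback of the forward operation that produced the bad tensor (torch.autograd.detect_anomaly is the standard context manager for this):

import torch

with torch.autograd.detect_anomaly():
    q, q_hat = forward(x)
    loss = torch.nn.functional.mse_loss(q, q_hat)  # illustrative loss
    loss.backward()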

I really have no idea where I am modifying a variable that is needed for gradient computation. All I do in function f is index some data from the input, do some computations (add, multiply), and store the result in a new tensor.
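
For reference, here is a minimal standalone snippet in the same spirit. It assumes (my assumption, not necessarily what f does) that a step reads back an earlier column of the preallocated tensor; in that case the later slice writes bump the tensor's version counter, the slice saved for backward goes stale, and the same SelectBackward version error appears:

import torch

a = torch.randn(4, 3, requires_grad=True)
res = torch.empty(4, 3)
res[:, 0] = a[:, 0]
for i in range(1, 3):
    # each step multiplies by the previous column of res,
    # so that slice of res is saved for the backward of mul
    res[:, i] = res[:, i - 1] * a[:, i]
# the later slice assignments bumped res's version counter,
# so the saved res[:, i - 1] no longer matches its recorded version
res.sum().backward()  # RuntimeError: ... modified by an inplace operation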