Is the gradient computed correctly after fixing an in-place operation?

Hi there,

I ran into this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.DoubleTensor [16, 4, 4]], which is output 0 of SliceBackward, is at version 401; expected version 399 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

After fixing the in-place operation, the error no longer appears. Does that mean the gradient is now calculated correctly?

I reuse the output of the model several times before computing the loss, roughly like this:

param = model(something)
pred = new_tensor                # a freshly created tensor
for _ in range(num_iterations):
    new_pred = pred
    new_pred[some_slice] += param * pred[some_slice]
    pred = new_pred
loss = criterion(pred, target)

Will this cause any problem in determining which values the loss should depend on? (Or should I keep each update of pred at a separate address rather than modifying it directly?)
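For concreteness, here is a minimal runnable sketch of the "separate address" variant I am asking about, using clone() so each iteration's pred gets its own storage. The shapes, the slice, and the toy param standing in for model(something) are all made up for illustration:

```python
import torch

torch.manual_seed(0)

# Toy stand-in for param = model(something); shapes are hypothetical.
param = torch.randn(16, 1, 1, requires_grad=True)
pred = torch.ones(16, 4, 4)  # fresh tensor with no grad history

for _ in range(3):
    # clone() gives new_pred its own storage, so the slice update below
    # does not bump the version counter of any tensor autograd has saved.
    new_pred = pred.clone()
    new_pred[:, :2, :2] += param * pred[:, :2, :2]
    pred = new_pred

loss = pred.mean()
loss.backward()  # runs without the "inplace operation" RuntimeError
```

With the aliasing version (new_pred = pred, no clone), the slice update mutates the very tensor autograd saved from the previous iteration, which is what triggers the version-counter error above.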

Thanks a lot!