Hello everyone. When I implement the statement a = a + b, I know it can be backpropagated through.
But when a is an array (tensor), this statement ends up triggering an in-place operation, and it cannot be backpropagated in PyTorch 1.2.0; its memory has been changed, I think.
This seems to work in PyTorch 1.1.0 but not in 1.2.0. If I want to implement it like that, what should I write instead?
Thanks for your reply and help.
Below is my error log.
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [212, 256, 7, 7]], which is output 0 of IndexPutBackward, is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
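For anyone hitting the same message, here is a minimal sketch of this class of error. It is not the original code (which, judging by IndexPutBackward, involved index assignment on a [212, 256, 7, 7] tensor); it uses sigmoid as a stand-in, because sigmoid's backward needs its own output, so mutating that output in place bumps the version counter and breaks backward, just as the log describes. It also enables the anomaly detection the hint mentions:

```python
import torch

# As the error hint suggests, anomaly detection points at the forward
# op whose saved tensor was later modified in place.
torch.autograd.set_detect_anomaly(True)

a = torch.ones(3, requires_grad=True)
b = a.sigmoid()   # sigmoid's backward needs its *output* b, so b is saved
b += 1            # in-place edit bumps b's version counter

try:
    b.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation" - same failure mode as the log above
    print(type(e).__name__, e)
```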
@Nikronic Thanks for your help, friend. I solved it.
After reading your posts and the PyTorch doc about torch.clone():
clone() → Tensor
Returns a copy of the self tensor. The copy has the same size and data type as self.
Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor.
I changed my code accordingly, and now it backpropagates correctly.
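Since the fixed code isn't shown, here is a minimal sketch of the clone() fix, again using sigmoid as a stand-in for the original op: mutate a clone instead of the tensor that autograd saved, and gradients still flow back to the source because clone() is recorded in the graph:

```python
import torch

a = torch.ones(3, requires_grad=True)
b = a.sigmoid()   # b is saved for sigmoid's backward
c = b.clone()     # clone() is recorded in the graph, unlike copy_()
c += 1            # mutate the clone; the saved tensor b is untouched
c.sum().backward()  # gradients propagate through the clone back to a

# a.grad equals sigmoid(a) * (1 - sigmoid(a)), the derivative of sigmoid
print(a.grad)
```

The key point is that the in-place update now happens on a tensor nothing in the backward pass depends on, so the version-counter check passes.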