Difference between Tensor.clone() and Tensor.new_tensor()

Can you please explain the difference between Tensor.clone() and Tensor.new_tensor()? According to the documentation, Tensor.new_tensor(x) = x.clone().detach(). Additionally, according to this post on the PyTorch forum and this documentation page, x.clone() still maintains a connection with the computation graph of the original tensor (namely x). However, I am new to PyTorch and don’t quite understand how x.clone() interacts with the computation graph of x.
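For concreteness, here is a small sketch of what I think the documented equivalence means (toy values, just to illustrate, assuming I am reading the docs correctly):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

a = x.clone()           # stays connected to x's computation graph
b = x.new_tensor(x)     # per the docs, equivalent to x.clone().detach()
c = x.clone().detach()  # detached copy, no graph connection

print(a.requires_grad)  # True  -> gradients can still flow back to x
print(b.requires_grad)  # False -> cut off from x's graph
print(c.requires_grad)  # False
```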


So, as you said, x.clone() maintains the connection with the computation graph. That means that if you use the cloned tensor and derive the loss from it, the gradients of that loss can be computed all the way back, even beyond the point where the clone was created. However, if you detach the new tensor, as is done by .new_tensor(), then the gradients will only be computed from the loss backward up to that new tensor, but not further than that.
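A minimal sketch of the two cases (toy values, just to show where the gradient flow stops):

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)

# Case 1: clone() keeps the connection to x's graph.
y = x.clone()
loss1 = (y ** 2).sum()
loss1.backward()
print(x.grad)            # tensor([4., 6.]) -- gradients reach x through the clone

# Case 2: new_tensor() (i.e. clone().detach()) cuts the connection.
x.grad = None
z = x.new_tensor(x)
z.requires_grad_(True)   # z is a new leaf; make it track gradients itself
loss2 = (z ** 2).sum()
loss2.backward()
print(z.grad)            # tensor([4., 6.]) -- gradients stop at z
print(x.grad)            # None -- nothing flows back to x
```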

I hope this helps!


I see. That makes a lot of sense. Thank you very much.

Hi, Vahid! Say I have a tensor ‘A’, and I compute B = A.clone(); then I get my loss = Lossfunc([g(B); A]), where ‘;’ denotes concatenation. Do the gradients of A include two parts, where the first part comes directly from the A in the concatenation and the second part comes through B? The function g(.) does not modify A. So basically I mean there is a residual-style connection in the computation graph. Is that right?
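Here is a minimal sketch of that setup (with a toy g and loss as placeholders for my actual functions), checking whether A’s gradient accumulates contributions from both paths:

```python
import torch

A = torch.tensor([1.0, 2.0], requires_grad=True)
B = A.clone()              # still connected to A's computation graph

g_B = 3.0 * B              # stand-in for g(B); g does not modify A
out = torch.cat([g_B, A])  # "[g(B); A]" -- the concatenation
loss = out.sum()           # stand-in for Lossfunc
loss.backward()

# The path through B contributes d(3*B)/dA = 3, the direct path contributes 1,
# so each element of A.grad should be 3 + 1 = 4.
print(A.grad)              # tensor([4., 4.])
```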