Let me see if I understand (it seems the accepted answer here is outdated; .data is deprecated and going to be removed, according to what I've read in other answers from albanD).
.clone()
produces a new tensor instance with its own memory allocation for the tensor data. In addition, it remembers the history of the original tensor: it stays connected to the earlier graph and shows up as CloneBackward in grad_fn. The main advantage, it seems, is that it's safer wrt in-place ops, afaik.
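For example (a minimal sketch I put together, not from the original answers): cloning a tensor that requires grad gives a result whose grad_fn is a CloneBackward node, and gradients flow through it back to the original:

import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = x.clone()
print(y.grad_fn)   # something like <CloneBackward0 object at 0x...> -- y is attached to x's graph
y.sum().backward()
print(x.grad)      # tensor([1., 1., 1.]) -- gradients flow through the clone back to x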
deepcopy
makes a deep copy of the original tensor, meaning it creates a new tensor instance with a new memory allocation for the tensor data (it definitely does this part correctly, per my tests below). I assume it also deals with the history somehow, either by pointing to the old history or by creating a brand new deep copy of it. I'm unsure how to test this, but I believe that if it is to behave as a proper deep copy it should create a new history that mirrors the original (instead of just pointing to it).
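One probe I can think of (just a sketch, and behaviour may differ across PyTorch versions): try to deepcopy a non-leaf tensor, i.e. one that already carries history. In the versions I've tried this raises a RuntimeError saying only graph leaves support the deepcopy protocol, which suggests the history is not copied at all:

import copy
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
z = x * 2                      # non-leaf tensor with a grad_fn (MulBackward)
try:
    z_copy = copy.deepcopy(z)  # attempt to deep-copy a tensor that has history
except RuntimeError as e:
    # e.g. "Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol ..."
    print(e)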
Test I did wrt memory allocation:
import copy
import torch

def clone_vs_deepcopy():
    x = torch.tensor([1, 2, 3.])
    x_clone = x.clone()
    x_deep_copy = copy.deepcopy(x)
    # mutate the original in place; the copies should be unaffected
    x.mul_(-1)
    print(f'x = {x}')
    print(f'x_clone = {x_clone}')
    print(f'x_deep_copy = {x_deep_copy}')

clone_vs_deepcopy()
output
x = tensor([-1., -2., -3.])
x_clone = tensor([1., 2., 3.])
x_deep_copy = tensor([1., 2., 3.])
Since neither copy changed, they must be backed by different memory. I just realized I could have checked it with id or something… alas.
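For the record, a more direct check (a sketch; note that id only compares the Python wrapper objects, whereas data_ptr() gives the address of the underlying data) would be to compare data pointers instead of mutating in place:

import copy
import torch

x = torch.tensor([1., 2., 3.])
x_clone = x.clone()
x_deep_copy = copy.deepcopy(x)
print(x.data_ptr() == x_clone.data_ptr())      # False -- the clone has its own storage
print(x.data_ptr() == x_deep_copy.data_ptr())  # False -- the deepcopy has its own storage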
I am still seeking clarification on the history part: if we use deepcopy, is the history deep-copied or just pointed to? I know that for clone it is a pointer copy to the original history and not a complete deep copy.
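A quick way to see the pointer-copy behaviour for clone (again just a sketch; this is how it behaves in the versions I've checked): the CloneBackward node of the clone references the very same grad_fn object as the tensor it was cloned from, rather than a copy of it:

import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
z = x * 2      # z.grad_fn is a MulBackward node
y = z.clone()  # y.grad_fn is a CloneBackward node
# the clone's backward node points at z's existing graph node, not at a duplicate of it
print(y.grad_fn.next_functions[0][0] is z.grad_fn)  # True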
related: