Why is torch.tensor(source_tensor) not preferred?

As the title says, why is torch.tensor(source_tensor) not preferred, and why is tensor.clone().detach() preferred instead when copying a tensor?

Hi,

Mostly for clarity. tensor.clone().detach() makes it very clear what happens to the Tensor: you first allocate new memory for the copy, then detach it from the autograd graph of the original tensor.
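
As a minimal sketch of what that buys you (the names `src` and `copy` are just illustrative, assuming a source tensor that requires gradients):

```python
import torch

# Illustrative source tensor that participates in the autograd graph.
src = torch.ones(3, requires_grad=True)

# clone() allocates new memory for the copy; detach() then cuts the
# copy out of the autograd graph of the original tensor.
copy = src.clone().detach()

print(copy.requires_grad)                 # False: detached from the graph
print(copy.data_ptr() == src.data_ptr())  # False: separate storage
copy[0] = 5.0                             # safe: writing to the copy...
print(src[0])                             # ...leaves the original unchanged
```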

torch.tensor(source_tensor) does the same thing, but it is easy to forget that it also detaches the result from the autograd graph, which can lead to hard-to-debug issues when using it.
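
The implicit detach is the part that bites people. Here is the same copy done with torch.tensor() (recent PyTorch versions even emit a UserWarning here recommending clone().detach() instead, if I recall correctly):

```python
import torch

src = torch.ones(3, requires_grad=True)

# torch.tensor() also copies the data into new memory, but it detaches
# the result implicitly; gradients will not flow from `copy` back to
# `src`, which is easy to miss if you expected a differentiable copy.
copy = torch.tensor(src)

print(copy.requires_grad)  # False: silently detached from the graph
```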


Thank you very much.