Equivalent of b=Variable(a.data) when porting from 0.3 to 1.0?

Hi, all:
I'm confused while porting some existing source code from 0.3 to 1.0.
One of the situations that occurs most often is porting this statement:

b=Variable(a.data)

Personally, I can think of several available approaches:

b=a.data
b=a.detach()
b=a.clone().detach()

I wonder which one is the correct way? Or is there a canonical way to do it?
Thank you.

b = Variable(a)

In PyTorch 0.4.0 and above, Tensors and Variables have merged. This means that you don’t need the Variable wrapper everywhere in your code anymore.
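As a quick sketch (assuming nothing beyond a standard torch import, with a throwaway tensor a), a plain Tensor can now carry requires_grad itself and take part in autograd directly:

import torch

# In 0.4+ a plain Tensor participates in autograd directly; no Variable wrapper needed.
a = torch.randn(3, requires_grad=True)
loss = (a * 2).sum()
loss.backward()
print(a.grad)   # tensor([2., 2., 2.])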

.data

.data was the primary way to get the underlying Tensor from a Variable. After the merge of Tensors and Variables, calling y = x.data still has similar semantics: y will be a Tensor that shares the same data with x, is unrelated to the computation history of x, and has requires_grad=False.
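A minimal illustration of that behaviour, using a throwaway tensor x:

import torch

x = torch.ones(3, requires_grad=True)
y = x.data              # shares storage with x, detached from the graph
print(y.requires_grad)  # False
y[0] = 5.0              # also changes x, but autograd will not notice this change
print(x)                # tensor([5., 1., 1.], requires_grad=True)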

.detach()

PyTorch keeps track of all operations involving tensors for which the gradient may need to be computed (i.e., requires_grad is True). The operations are recorded as a directed graph. The detach() method constructs a new view on a tensor which is declared not to need gradients, i.e., it is excluded from further tracking of operations, so the subgraph involving this view is not recorded. In short, tensor.detach() creates a tensor that shares storage with tensor but does not require grad.
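For example (again with a throwaway tensor x; the commented-out backward is what would fail):

import torch

x = torch.ones(3, requires_grad=True)
y = x.detach()                        # view on the same storage, excluded from the graph
print(y.requires_grad)                # False
print(y.data_ptr() == x.data_ptr())   # True: no copy was made
# Gradients never flow back to x through y:
# (y * 3).sum().backward()  # raises: element 0 of tensors does not require grad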

.clone().detach()

tensor.clone() creates a copy of tensor that imitates the original tensor's requires_grad field, and detach() then acts on the copied tensor, not on the original tensor.
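A small sketch of the difference, reusing the names a and b from your question:

import torch

a = torch.ones(3, requires_grad=True)
b = a.clone().detach()                # independent copy, cut off from a's graph
print(b.data_ptr() == a.data_ptr())   # False: clone() allocated new storage
b[0] = 7.0                            # does not affect a
print(a)                              # tensor([1., 1., 1.], requires_grad=True)
print(b.requires_grad)                # False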

You can use any of these depending on how you handle the Tensor a. I think b = a.data will do for you.
