I felt confused when trying to port some existing source code from 0.3 to 1.0.
One of the situations that occurs most often is porting this command:

b = Variable(a)

Personally, I can think of several possible approaches.
I wonder which one is the correct way? Or is there any canonical way to do it?
In PyTorch 0.4.0 and above, Tensors and Variables have merged. This means that you don't need the Variable wrapper everywhere in your code anymore.

.data was the primary way to get the underlying Tensor from a Variable. After the merge, y = x.data still has similar semantics: y will be a Tensor that shares the same data with x, is unrelated to the computation history of x, and has requires_grad=False.
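A minimal sketch of those .data semantics, assuming PyTorch 0.4 or later (tensor names are illustrative):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.data

# y is detached from autograd bookkeeping...
print(y.requires_grad)  # False

# ...but still shares storage with x: an in-place
# change through y is visible through x.
y[0] = 5.0
print(x[0].item())  # 5.0
```

Note that because y shares storage with x, in-place edits through y silently bypass autograd, which is exactly why the next section's detach() is worth understanding.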
PyTorch keeps track of all operations involving tensors for which the gradient may need to be computed (i.e., requires_grad is True). These operations are recorded as a directed graph. The detach() method constructs a new view on a tensor that is declared not to need gradients, i.e., it is excluded from further tracking of operations, so the subgraph involving this view is not recorded.
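A short sketch of that exclusion, assuming PyTorch 0.4 or later (the variable names are made up for illustration):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
d = x.detach()  # view on x, cut out of the autograd graph

# Operations on x are recorded, so gradients flow back to x.
z = (x * 3).sum()
z.backward()
print(x.grad)  # tensor([3.])

# Operations on the detached view are not tracked.
print(d.requires_grad)          # False
print((d * 3).requires_grad)    # False
```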
tensor.detach() creates a tensor that shares storage with tensor but does not require grad. tensor.clone() creates a copy of tensor that imitates the original tensor's requires_grad field. In tensor.clone().detach(), detach() acts on the copied tensor, not on the original tensor.
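To compare the options side by side, here is a small sketch, assuming PyTorch 0.4 or later (the names b_data, b_detach, b_clone are illustrative):

```python
import torch

a = torch.ones(2, requires_grad=True)

b_data = a.data        # shares storage with a, requires_grad=False
b_detach = a.detach()  # shares storage with a, requires_grad=False
b_clone = a.clone()    # new storage, keeps requires_grad=True

print(b_data.requires_grad, b_detach.requires_grad, b_clone.requires_grad)
# False False True

# clone().detach() detaches only the copy; a itself still requires grad.
b = a.clone().detach()
print(b.requires_grad, a.requires_grad)  # False True
```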
You can use any of these depending on how you handle the Tensor a. I think b = a.data will do for you.