Trying to get data from a tensor with
tensor.data[0]
and I get
AttributeError: 'DoubleTensor' object has no attribute 'data'
Any idea why this is?
Tensors don't have a data attribute (Variables do). Just use tensor[0].
(Variable is a wrapper around tensor that supports automatic differentiation. Variable.data is the underlying tensor)
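For example (a minimal sketch against the pre-0.4 API, where Variable and tensor were separate types):

import torch
from torch.autograd import Variable

t = torch.DoubleTensor([1.0, 2.0])
print(t[0])       # index the tensor directly; no .data here

v = Variable(t)   # Variable wraps the tensor to support autograd
print(v.data[0])  # .data exposes the wrapped tensor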
Now I realize what the problem is. One of my objects is a variable, the other is a tensor.
<class 'torch.autograd.variable.Variable'>
<class 'torch.FloatTensor'>
How can I convert from the former to the latter (autograd Variable to FloatTensor)?
EDIT:
Oh, now I understand: to go from a Variable to a tensor, you just use variable.data.
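For instance (a small sketch, again assuming the pre-0.4 API):

import torch
from torch.autograd import Variable

v = Variable(torch.FloatTensor([1.0, 2.0]))
t = v.data       # unwrap: t is the underlying FloatTensor
print(type(t))   # <class 'torch.FloatTensor'> on pre-0.4 versions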
Extra info, FWIW: from 0.5 onwards you will get an error if you try tensor[0] on a 0-dim tensor:
UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
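A quick sketch of the fix:

import torch

x = torch.tensor(3.14)  # a 0-dim tensor
# x[0] would trigger the warning above (and an error in later versions)
val = x.item()          # the supported way to get the Python number
print(val)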
I think in PyTorch 0.4, tensor and tensor.data are the same thing now:
In [6]: type(x)
Out[6]: torch.Tensor
In [7]: type(x.data)
Out[7]: torch.Tensor
In [8]: x.__class__
Out[8]: torch.Tensor
In [9]: x.data.__class__
Out[9]: torch.Tensor
.data should be used carefully, as it detaches the tensor from the computation graph and might lead to wrong results. It still has similar semantics to the previous versions. It's safer to use tensor.detach() instead.
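For example, wherever you would have written x.data, the recommended form looks like this (a minimal sketch):

import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
z = y.detach()  # shares y's memory and has requires_grad=False,
                # but in-place changes to z are still tracked by autograd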
Thanks! I'd like to ask one more question: what's the meaning of "might lead to…"? So .data is not exactly the same as .detach()?
Yes, that's correct. Both share the underlying data of the tensor and have requires_grad=False. While using x.data is unrelated to the computation graph, x.detach() will have its in-place changes reported by autograd if x is needed in backward, and will raise an error if necessary.
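A small sketch of that difference (the values here are just illustrative):

import torch

x = torch.ones(3, requires_grad=True)
y = x.exp()          # exp() saves its output y for the backward pass

y.detach().zero_()   # in-place change is recorded by autograd ...
# y.sum().backward() # ... so this would raise a RuntimeError

y = torch.ones(3, requires_grad=True).exp()
y.data.zero_()       # same in-place change, but invisible to autograd
y.sum().backward()   # runs silently and produces wrong (all-zero) gradients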
There is an example in the Migration Guide in the "What about .data?" section.
Thanks, @ptrblck!
So to summarize: both are used to detach a tensor from the computation graph and return a tensor that shares the same data. The difference is that x.detach() adds the constraint that when the data is changed in place, backward will fail. So why do we still need x.data? Is this just a historical reason?
It's still used in e.g. optimizers to update the parameters. Although it's not recommended to use it, there are still valid use cases for .data.
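For example, a classic SGD-style update written with .data looks roughly like this (a sketch, not any particular optimizer's actual source):

import torch
import torch.nn as nn

model = nn.Linear(2, 1)
loss = model(torch.randn(4, 2)).sum()
loss.backward()

lr = 0.01
for p in model.parameters():
    # update the parameters in place without recording the op in the graph
    p.data.add_(-lr * p.grad.data)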