It always creates a Variable automatically, not a Tensor

I installed the PyTorch GPU version on my Mac with an eGPU.
With the CPU version, Tensor and Variable are separate types. With the GPU version, however, a Variable is created instead of a Tensor.
I have not seen any errors related to this so far, e.g. with Dataset or DataLoader.
Why does this happen?
Will it cause any problems?

Test code below:

import torch
from torch.autograd import Variable

x = torch.randn(2, 2)
print(x)
print(type(x))
# On the CPU version, the next two lines raise:
# RuntimeError: cannot call .data on a torch.Tensor: did you intend to use autograd.Variable?
print(x.data)       # commented out on the CPU version
print(type(x.data)) # commented out on the CPU version

y = Variable(x)
print(y)
print(type(y))
print(y.data)
print(type(y.data))

Output on the GPU version:

 0.3674  0.7864
-1.3684  0.5154
[torch.FloatTensor of size (2,2)]

<class 'torch.autograd.variable.Variable'>

 0.3674  0.7864
-1.3684  0.5154
[torch.FloatTensor of size (2,2)]

<class 'torch.autograd.variable.Variable'>

 0.3674  0.7864
-1.3684  0.5154
[torch.FloatTensor of size (2,2)]

<class 'torch.autograd.variable.Variable'>

 0.3674  0.7864
-1.3684  0.5154
[torch.FloatTensor of size (2,2)]

<class 'torch.autograd.variable.Variable'>

Output on the CPU version:

 1.1078 -0.8237
 1.3302  0.4462
[torch.FloatTensor of size 2x2]

<class 'torch.FloatTensor'>
Variable containing:
 1.1078 -0.8237
 1.3302  0.4462
[torch.FloatTensor of size 2x2]

<class 'torch.autograd.variable.Variable'>

 1.1078 -0.8237
 1.3302  0.4462
[torch.FloatTensor of size 2x2]

<class 'torch.FloatTensor'>

PyTorch 0.4 removes the separation between Variable and Tensor, so there is nothing to worry about.
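
For example, in 0.4 a plain tensor can track gradients directly, without wrapping it in a Variable (a minimal sketch; the names here are just illustrative):

import torch

x = torch.randn(2, 2, requires_grad=True)  # a Tensor that records operations for autograd
print(type(x))   # <class 'torch.Tensor'> -- no separate Variable class anymore
loss = (x * x).sum()
loss.backward()  # autograd works on Tensors directly
print(x.grad)    # gradients are stored on the Tensor itself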

Best regards

Thomas


I was worried that my installation was wrong, but it was just the PyTorch version upgrade!
Thank you so much!!!