Device argument in torch.tensor

Hi everyone,

With PyTorch 0.4.0, if I do this:

R = torch.randn((5,5))
print( torch.tensor(R,device='cuda:0').is_cuda )

I get False.
Is that normal?

thanks
y

Hello Yann! I’m getting the same result, but this isn’t how you usually put tensors into your GPU’s VRAM anyway. If you wish to put R into GPU memory, use R = torch.randn((5,5)).cuda() instead.
print(R.is_cuda) will then return True.
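
In other words, something like this minimal sketch (it assumes a CUDA-enabled build with at least one visible GPU):

import torch

R = torch.randn((5, 5)).cuda()   # allocate on the CPU, then copy to the default GPU
print(R.is_cuda)                 # True
print(R.device)                  # should report a cuda device, e.g. cuda:0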

This: print(torch.tensor(torch.randn((5,5), device='cuda:0')).is_cuda) would also return True, but the outer torch.tensor call is redundant, so print(torch.randn((5,5), device='cuda:0').is_cuda) is equivalent.

I do believe you found a bug though, since print(torch.tensor([1,2,3],device='cuda:0').is_cuda) returns True.
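
For reference, here is a minimal side-by-side comparison of the two cases (assuming a CUDA build of 0.4.0 with at least one GPU):

import torch

R = torch.randn((5, 5))
print(torch.tensor(R, device='cuda:0').is_cuda)          # False on 0.4.0, which looks like the bug
print(torch.tensor([1, 2, 3], device='cuda:0').is_cuda)  # True, as expected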

Thank you very much Anton,

In fact, I was looking for a “PyTorch 0.4.0”-style alternative to this kind of code:

x = Variable(torch.randn(5, 5)*2, requires_grad=True).cuda()

Because of the scaling factor 2, I can’t just write the one-liner you suggested. In the 0.4.0 documentation, it is not clear to me how to write this properly. In the init.py file in the PyTorch repo, the initializations are done inside a torch.no_grad() context. Here, it would be:

x = torch.empty((5,5),requires_grad=True,device='cuda:0')
with torch.no_grad():
    x.normal_()
    x.mul_(2)

but maybe there is a better (proper, one- or two-line) way to do this. What about this:

x = torch.randn(5,5).mul(2).to('cuda:0').requires_grad_()
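
For completeness, this is the kind of quick check I have in mind, to make sure it really gives a leaf tensor on the GPU that gradients flow back to (assuming a CUDA device is available; the squared-sum loss is just an arbitrary example):

import torch

x = torch.randn(5, 5).mul(2).to('cuda:0').requires_grad_()
print(x.is_cuda, x.requires_grad, x.is_leaf)  # should print: True True True

loss = (x ** 2).sum()      # arbitrary example loss
loss.backward()
print(x.grad is not None)  # should print: True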

cheers
y

Don’t mention it ^^ And if that works, then great! I’ll check tomorrow whether there’s some other way and whether that line works. It seems like it would check out okay, though. For now, I need to sleep.