hi,
so i saw some posts about the difference between setting torch.cuda.FloatTensor and setting tensor.to(device='cuda').
i'm still a bit confused. are they completely interchangeable commands?
is there a difference between performing a computation on the gpu and moving a tensor to gpu memory?
i mean, is there a case where i want to utilize the gpu for speed but still keep the tensor in cpu memory?
maybe someone can point me in the right direction
Changing the type to torch.cuda.FloatTensor would not only push the tensor to the default GPU but would also potentially transform the data type (to float32). The to('cuda:id') operation only moves the tensor to the specified device and keeps its original data type, as seen here:
import torch

x = torch.tensor([1])  # int64 tensor in CPU memory
print(x.type())
> torch.LongTensor

y = x.type(torch.cuda.FloatTensor)  # changes the dtype AND moves to the default GPU
print(y, y.type())
> tensor([1.], device='cuda:0') torch.cuda.FloatTensor

z = x.to('cuda')  # only moves to the GPU; the dtype stays int64
print(z, z.type())
> tensor([1], device='cuda:0') torch.cuda.LongTensor
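Note that to() can also cast and move in a single call when you pass both a device and a dtype, which is where the two approaches overlap. A minimal sketch using only the generic torch.Tensor.to signature:

import torch

x = torch.tensor([1])

# move to the GPU and cast to float32 in one call;
# same result as x.type(torch.cuda.FloatTensor)
w = x.to(device='cuda', dtype=torch.float32)
print(w, w.type())
> tensor([1.], device='cuda:0') torch.cuda.FloatTensor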
That's not possible, since the GPU can only operate on data stored in GPU memory.
(Note that unified memory would still copy the data to the device if needed.)
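You can see this directly by mixing devices in a single op; PyTorch raises an error rather than copying data behind your back. A small sketch, assuming a CUDA device is available:

import torch

a = torch.ones(3)                  # lives in host (CPU) memory
b = torch.ones(3, device='cuda')   # lives in GPU memory

try:
    a + b  # ops require all tensors on the same device
except RuntimeError as e:
    print(e)  # "Expected all tensors to be on the same device..."

So the usual pattern is: push the inputs to the GPU with to('cuda'), run the computation there, and call .cpu() on the result if you need it back in host memory.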