Tensor.is_cuda not working?

I cannot get Tensor.is_cuda to return True, even though moving the tensor to the GPU appears to succeed. Here is a simple test program:

import torch

avail = torch.cuda.is_available()
print("Cuda available: ", avail)

a = torch.randn(2, 2)
print(a)
print("Moving a to the GPU...")
a.cuda()
verif = a.is_cuda
print("verification = ", verif)

Output looks like this:

(cda) $ !py
python ttt.py
Cuda available:  True
tensor([[0.8693, 2.1210],
        [1.4204, 0.1404]])
Moving a to the GPU...
verification =  False

I am running Ubuntu 22.04 with an RTX 3080. Python is 3.11 and the torch version is 2.1.0+cu118. nvidia-smi shows CUDA version 11.2 and driver version 535.113.01. I rebuilt everything from scratch a couple of weeks ago. What might be going wrong here?

This is not just an academic exercise. When I try running some torch-based NN training code, it fails in F.linear() with a complaint that not all tensors are on the same device, even though I explicitly move them to the GPU with Tensor.cuda().

Thanks.

.cuda() and .to() calls on tensors are not executed in-place, so you need to assign the result back to a variable.
This should fix it:

a = a.cuda()
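
For reference, here is a minimal corrected sketch of your test program (the device handling is the only change; everything else is as you posted it):

import torch

avail = torch.cuda.is_available()
print("Cuda available: ", avail)

a = torch.randn(2, 2)
print(a)
print("Moving a to the GPU...")
a = a.cuda()            # reassign: Tensor.cuda() returns a new tensor on the GPU
# equivalently: a = a.to("cuda")
print("verification = ", a.is_cuda)   # now prints True

Note that this applies to tensors only: calling .cuda() on an nn.Module moves its parameters in-place, which is why model.cuda() works without reassignment but a plain tensor's .cuda() does not. That is also the likely cause of the device-mismatch error you are seeing in F.linear() — the input tensors were never actually rebound to their GPU copies.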

Got it - thanks for the quick response!