TypeError: can't convert cuda:0 device type tensor to numpy.

When I try to run my code in Colab, I receive the following error

TypeError: can’t convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

which comes from this line in my function

def to_torch(x, use_gpu=True, dtype=np.float32):
    x = np.array(x, dtype=dtype)
    var = torch.from_numpy(x)
    return var.cuda() if use_gpu is not None else var

What I understand is that the x passed into np.array() is a tensor coming from the GPU. In fact, it is a list with a single element, and that element is a GPU tensor.

I tried replacing the x inside np.array() with x[0].cpu() or x[0].detach().cpu(), but for some reason this raises an error saying that these elements are integers.

I also tried x.cpu(), but it raises an error as well, since x is a list.
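For clarity, the changes I tried looked roughly like this (my own edits to the function above, not the repo's code):

import numpy as np
import torch

def to_torch(x, use_gpu=True, dtype=np.float32):
    # attempt 1: convert only the first list element after moving it off the GPU;
    # this raises an AttributeError because x[0] turns out to be an integer
    # x = np.array(x[0].detach().cpu(), dtype=dtype)

    # attempt 2: move the whole input off the GPU;
    # this fails because x is a list and lists have no cpu() method
    # x = np.array(x.cpu(), dtype=dtype)

    x = np.array(x, dtype=dtype)  # original line that raises the TypeError
    var = torch.from_numpy(x)
    return var.cuda() if use_gpu is not None else var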

This is the code on GitHub. I used the following commands to run it in Colab:

!git clone https://github.com/SalwaMostafa/marl-ae-comm.git

import os
os.chdir("/content/marl-ae-comm/marl-grid/find-goal/")
!pwd

!pip3 install -e ../env
!pip install -r requirements.txt

import os
os.chdir("/content/marl-ae-comm/marl-grid/find-goal/")

!python train.py --set num_workers 10 env_cfg.comm_len 10 env_cfg.num_agents 2 env_cfg.view_size 8 env_cfg.max_steps 1024 train_iter 3000000 --gpu 0

The error is raised inside the to_torch function shown above.

I would appreciate it if someone could help me debug this error. Thank you so much in advance.

I don’t fully understand the idea behind to_torch.
If the input argument x is already a CUDATensor, you wouldn’t need to push it to the CPU first, transform it into a numpy array, transform it back to a PyTorch CPUTensor, and finally push it back to the GPU.
Wouldn’t it work to check the type of x and just return it if it’s already a tensor?
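Something like this (just a sketch of that idea, not the repo's actual code) would avoid the round trip:

import numpy as np
import torch

def to_torch(x, use_gpu=True, dtype=np.float32):
    # if the input is already a tensor, just move it to the right device
    if isinstance(x, torch.Tensor):
        return x.cuda() if use_gpu else x
    # otherwise fall back to the original numpy conversion
    x = np.array(x, dtype=dtype)
    var = torch.from_numpy(x)
    # note: checking `use_gpu` directly, since `use_gpu is not None` is also
    # True when use_gpu=False
    return var.cuda() if use_gpu else var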


Thank you so much for your reply. x is a list, and whenever I add a .cpu() call it raises an error that there is no cpu attribute.

to_torch is called inside this function at line 136. I still do not understand why this error is raised.

The cpu() method is defined on tensors and is not available on lists.
The error is raised because you are passing (a list of) CUDATensors to this method, as seen here:

import numpy as np
import torch

def to_torch(x, use_gpu=True, dtype=np.float32):
    x = np.array(x, dtype=dtype)
    var = torch.from_numpy(x)
    return var.cuda() if use_gpu is not None else var


x = [1, 2]
out = to_torch(x)
print(out)
# tensor([1., 2.], device='cuda:0')

x = [torch.tensor(1), torch.tensor(2)]
out = to_torch(x)
print(out)
# tensor([1., 2.], device='cuda:0')

# your error
x = torch.tensor(1).cuda()
out = to_torch(x)
# TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

x = [torch.tensor(1).cuda(), torch.tensor(2).cuda()]
out = to_torch(x)
# TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
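If the input can also be a list of CUDA tensors, one way around the failing numpy conversion (a rough sketch with a hypothetical helper name, not the repo's code) is to stack the tensors directly:

import torch

def to_torch_from_list(x, use_gpu=True):
    # hypothetical helper: stack a list of tensors on whatever device they
    # already live on, instead of converting through numpy
    if isinstance(x, (list, tuple)) and all(isinstance(t, torch.Tensor) for t in x):
        var = torch.stack([t.detach() for t in x]).float()
    elif isinstance(x, torch.Tensor):
        var = x.detach().float()
    else:
        var = torch.as_tensor(x, dtype=torch.float32)
    return var.cuda() if use_gpu else var

x = [torch.tensor(1).cuda(), torch.tensor(2).cuda()]
print(to_torch_from_list(x))
# tensor([1., 2.], device='cuda:0')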

Thank you so much for your help. I understand the problem now.