Can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

def display_prediction(img, true_label):
    seg_pred = model(img.unsqueeze(0).cuda())
    fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(10, 10))
    axes[0].imshow(img.permute(1, 2, 0))
    axes[0].set_title('Image')
    axes[1].imshow(seg_pred[0].squeeze().detach(), cmap='gray')
    axes[1].set_title('Prediction')
    axes[2].imshow(true_label.squeeze(), cmap='gray')
    axes[2].set_title('Ground Truth')

display_prediction(img_test, lb_test)

I am running my code on Google Colab and I get this error. How can I solve it?

You need to allocate the tensor in RAM by using

model(img_test.unsqueeze(0).cuda()).detach().cpu().clone().numpy()

which means that you are going to:
detach --> cut the computational graph
cpu --> allocate the tensor in RAM
clone --> clone the tensor so you don't modify the output in-place
numpy --> convert the tensor to a NumPy array

Note: permute is a PyTorch function; once you have converted to a NumPy array, use transpose instead.
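
For instance, a minimal sketch of the full chain with the same variable names as above (assuming model and img_test are already defined as in your snippet):

seg_pred = model(img_test.unsqueeze(0).cuda())    # CUDA tensor attached to the autograd graph
seg_np = seg_pred.detach().cpu().clone().numpy()  # now safe to hand to NumPy/matplotlib
# permute(1, 2, 0) on a Tensor becomes transpose(1, 2, 0) on the array:
img_np = img_test.cpu().numpy().transpose(1, 2, 0)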

Now it gives this error:

'Tensor' object has no attribute 'deatch'

The same happens when I try the word detach instead of deatch.

Hi,

Which version of PyTorch are you using? .detach() was introduced in 0.4, if I'm not mistaken.

I am using version 1.0.1.

Then Tensor does have a .detach() method. Make sure you call it on a Tensor.

Also, you use both img and seg_pred in your code. Make sure to apply .detach().cpu().numpy() to each of them if you need a NumPy array from them (the .clone() is not necessary in this case, I think; if you get an error from NumPy saying that you are trying to modify a read-only array, add it back).
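
For example, a minimal sketch of the plotting function with those conversions applied (assuming img and true_label are CPU tensors, as in the original code):

def display_prediction(img, true_label):
    seg_pred = model(img.unsqueeze(0).cuda())
    # detach from the graph, move to host memory, convert for matplotlib
    seg_np = seg_pred[0].squeeze().detach().cpu().numpy()
    fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(10, 10))
    axes[0].imshow(img.permute(1, 2, 0))
    axes[0].set_title('Image')
    axes[1].imshow(seg_np, cmap='gray')
    axes[1].set_title('Prediction')
    axes[2].imshow(true_label.squeeze(), cmap='gray')
    axes[2].set_title('Ground Truth')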

Actually, the error now is related to NumPy:

'numpy.ndarray' object has no attribute 'detach'

You are trying to call .detach() on something that is already a NumPy array, as the error says.
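
If you are not sure which type you are holding, a quick check like this sorts it out (x here stands for whatever object raised the error):

import numpy as np
import torch

print(type(x))
if torch.is_tensor(x):
    x = x.detach().cpu().numpy()  # only Tensors need the conversion chain
# if it is already an np.ndarray, there is nothing to do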

So what should I do? If I remove model(img_test.unsqueeze(0).cuda()).detach().cpu().clone().numpy(),
I go back to the first error:

Can’t convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first 

I think the simplest thing is for you to print the objects and check whether you have a Tensor (if nothing is specified, it's on the CPU; otherwise it will tell you it's a CUDA Tensor) or an np.array.

You need to give Tensors to your model and torch operations, and np.arrays to everything else. The conversions are (see the sketch after this list):

To go from np.array to cpu Tensor, use torch.from_numpy().
To go from cpu Tensor to gpu Tensor, use .cuda().
To go from a Tensor that requires_grad to one that does not, use .detach() (in your case, your net output will most likely require gradients, so it will need to be detached).
To go from a gpu Tensor to cpu Tensor, use .cpu().
To go from a cpu Tensor to np.array, use .numpy().
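
A minimal round-trip sketch of those conversions (assuming a CUDA device is available):

import numpy as np
import torch

arr = np.zeros((2, 3), dtype=np.float32)
t_cpu = torch.from_numpy(arr)        # np.array -> cpu Tensor
t_gpu = t_cpu.cuda()                 # cpu Tensor -> gpu Tensor
out = (t_gpu * 2).requires_grad_()   # stand-in for a network output
back = out.detach().cpu().numpy()    # cut the graph, move to host, -> np.array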

Thank you @albanD and @JuanFMontesinos, it works by using model(img_test.unsqueeze(0).cuda()).cpu().
Thank you again :smiley:

For me it's giving this error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Double post from here.

To solve this problem, go to the plot.py file in the utils folder and make these modifications to the output_to_target function
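
The actual modifications are not shown above, but the general fix for this error is to move the mismatched tensor onto the other tensor's device before combining them. A generic sketch (the names here are illustrative, not the real output_to_target code):

import torch

def combine(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Move b to whatever device a lives on before mixing them.
    return a + b.to(a.device)

x = torch.randn(3, device='cuda' if torch.cuda.is_available() else 'cpu')
y = torch.randn(3)  # cpu tensor
z = combine(x, y)   # no "Expected all tensors to be on the same device" error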