seg_pred = model(img_test.unsqueeze(0).cuda())
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(10, 10))
axes[0].imshow(seg_pred.squeeze().detach(), cmap='gray')
axes[1].imshow(true_label.squeeze(), cmap='gray')
I am running my code on Google Colab and I get this error. How can I solve it?
You need to allocate the tensor in RAM by using seg_pred.detach().cpu().clone().numpy(),
which means that you are going to:
detach --> cut the computational graph
cpu --> allocate the tensor in RAM
clone --> clone the tensor so you don't modify the output in-place
numpy --> port the tensor to numpy
Note: permute is a PyTorch function; once the tensor is converted to a numpy array you should use transpose instead.
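The steps above can be sketched as follows, assuming a placeholder random tensor in place of the actual model output (the device check keeps the sketch runnable on CPU-only machines):

```python
import torch
import numpy as np

# Stand-in for the model output: on the GPU when available, and
# tracking gradients, just like a real network output would be.
device = "cuda" if torch.cuda.is_available() else "cpu"
seg_pred = torch.rand(1, 1, 4, 4, device=device, requires_grad=True)

# detach -> cut the computational graph
# cpu    -> move the tensor into host RAM
# clone  -> copy, so the original output is not modified in-place
# numpy  -> view the result as a numpy array
img = seg_pred.squeeze().detach().cpu().clone().numpy()
print(type(img), img.shape)  # <class 'numpy.ndarray'> (4, 4)
```

The resulting array can be passed straight to plt.imshow.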
Now it gives this error:
'Tensor' object has no attribute 'deatch'
The same happens when I try the word detach instead of deatch.
Which version of PyTorch are you using?
.detach() was introduced in 0.4, if I'm not mistaken.
I am using version 1.0.1.
Tensor does have a .detach() method. Make sure you call it on a Tensor.
Also, you use both seg_pred and true_label in your code. Make sure to apply .detach().cpu().numpy() (the .clone() is not necessary in this case, I think; if you get an error from numpy saying that you try to modify a read-only array, then add it back) to each of them if you need a numpy array from them.
Actually, the error now is related to numpy:
'numpy.ndarray' object has no attribute 'detach'
You try to call .detach() on something that is already a numpy array, as stated in the error.
So what should I do? If I delete it, I go back to the first error:
Can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
I think the simplest thing is for you to print the objects and check whether you have a Tensor (if not specified, it's on the CPU; otherwise it will tell you it's a CUDA Tensor) or an np.array.
You need to give a Tensor to your model and torch operations, and an np.array to everything else.
To go from np.array to cpu Tensor, use torch.from_numpy().
To go from cpu Tensor to gpu Tensor, use .cuda().
To go from a Tensor that requires_grad to one that does not, use .detach() (in your case, your net output will most likely require gradients and so its output will need to be detached).
To go from a gpu Tensor to cpu Tensor, use .cpu().
To go from a cpu Tensor to np.array, use .numpy().
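The full round trip above can be sketched in a few lines; the GPU step is guarded so the sketch also runs on a CPU-only machine:

```python
import numpy as np
import torch

arr = np.ones((2, 3), dtype=np.float32)

t = torch.from_numpy(arr)        # np.array -> cpu Tensor
if torch.cuda.is_available():
    t = t.cuda()                 # cpu Tensor -> gpu Tensor
t = t.detach()                   # drop any gradient tracking
t = t.cpu()                      # gpu Tensor -> cpu Tensor
back = t.numpy()                 # cpu Tensor -> np.array

print(type(back))  # <class 'numpy.ndarray'>
```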
Thank you @albanD and @JuanFMontesinos, it works by using .detach().cpu().numpy().
Thank you again!
For me it's giving this error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
To solve this problem, go to the plot.py file in the utils folder and make these modifications to the output_to_target function.
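Independently of the plot.py change, the general fix for this class of RuntimeError is to move every tensor involved onto the same device before combining them. A minimal illustration with placeholder tensors (not the actual output_to_target code):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
pred = torch.rand(3, device=device)  # e.g. a model output, possibly on cuda:0
target = torch.rand(3)               # e.g. labels built on the cpu

# On a GPU machine, `pred - target` would raise the RuntimeError above.
# Fix: move one operand to the other's device first.
loss = pred - target.to(pred.device)
print(loss.device == pred.device)  # True
```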