TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

I am using a modified predict.py to test a pruned SqueezeNet model.

[phung@archlinux SqueezeNet-Pruning]$ python predict.py --image 3_100.jpg --model model_prunned --num_class 2
prediction in progress
Traceback (most recent call last):
File "predict.py", line 66, in <module>
prediction = predict_image(imagepath)
File "predict.py", line 52, in predict_image
index = output.data.numpy().argmax()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
[phung@archlinux SqueezeNet-Pruning]$

I understand that numpy does not support the GPU yet.

How should I modify the code to get rid of this error without invoking a tensor copy operation?


To solve your issue, you can use output.argmax(), as PyTorch has an argmax operation too: https://pytorch.org/docs/stable/torch.html?highlight=argmax#torch.argmax
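For example, line 52 could become something along these lines (just a sketch, assuming output is the raw model output still sitting on the GPU):

index = output.argmax().item()  # argmax runs on the GPU; .item() then gives a plain Python int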


@smth if I use output.argmax(), the problem seems to be solved. I will confirm this later.

@Amrit_Das I have some trouble extracting the class mapping information correctly, as shown in the error below:

[phung@archlinux SqueezeNet-Pruning]$ python predict.py --image 3_100.jpg --model model_prunned --num_class 2
prediction in progress
prediction = tensor(0, device='cuda:0')
l[0] = Lemon
l[1] = 0
l[0] = Orange
l[1] = 1
class_map[0] = Lemon
class_map[1] = Orange
Traceback (most recent call last):
File "predict.py", line 72, in <module>
name = class_mapping(prediction)
File "predict.py", line 65, in class_mapping
return class_map[str(index)]
KeyError: "tensor(0, device='cuda:0')"
[phung@archlinux SqueezeNet-Pruning]$
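The KeyError turned out to come from the dictionary lookup: the keys read from class_mapping.txt are the strings "0" and "1", but str() of a CUDA tensor is "tensor(0, device='cuda:0')", so it never matches. Converting the prediction to a plain Python int before building the key fixes the lookup, roughly like this:

# sketch of the fix: turn the CUDA tensor into a Python int before using it as a dict key
return class_map[str(index.item())]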

I have solved all the problems, and it looks like predict.py now runs without any further issues.

Please see the updated code below:

#Modified from https://github.com/amrit-das/Custom-Model-Training-PyTorch/blob/master/predict.py
import torch
import torch.nn as nn
#from torchvision.models import resnet18
from torchvision.transforms import transforms
import matplotlib.pyplot as plt
import numpy as np
from torch.autograd import Variable
import torch.nn.functional as F
from PIL import Image
import os
import sys
import argparse
from prune import *
from finetune import *

parser = argparse.ArgumentParser(description = 'To Predict from a trained model')
parser.add_argument('-i','--image', dest = 'image_name', required = True, help='Path to the image file')
parser.add_argument('-m','--model', dest = 'model_name', required = True, help='Path to the model')
parser.add_argument('-n','--num_class',dest = 'num_classes', required = True, help='Number of training classes')
args = parser.parse_args()

#model=ModifiedSqueezeNetModel().cuda()
#model = torch.load(args.model_name).cuda()
#model = resnet18(num_classes = int(args.num_classes))

path_to_model = "./"+args.model_name
#checkpoint = torch.load(path_to_model)
model = torch.load(path_to_model)

#model.load_state_dict(checkpoint)
#model.eval()

def predict_image(image_path):
    print("prediction in progress")
    image = Image.open(image_path)
    transformation = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    image_tensor = transformation(image).float()
    image_tensor = image_tensor.unsqueeze_(0)

    # .cuda() is not in-place, so assign the result back to the tensor
    if torch.cuda.is_available():
        image_tensor = image_tensor.cuda()

    input = Variable(image_tensor)
    output = model(input)

    #index = output.argmax()
    #print("output = ", output)
    max_value, max_index = torch.max(output,1)

    return max_index.item()

def class_mapping(index):
    mapping=open('class_mapping.txt','r')
    class_map={}
    for line in mapping:
        l=line.strip('\n').split('=')
        class_map[l[1]]=l[0]
        #print("l[0] = ", l[0])
        #print("l[1] = ", l[1])
    #print("class_map[0] = ", class_map[str(0)])
    #print("class_map[1] = ", class_map[str(1)])
    return class_map[str(index)]

if __name__ == "__main__":

    imagepath = "./test/Lemon/"+args.image_name
    prediction = predict_image(imagepath)
    #print("prediction = ", str(prediction))
    name = class_mapping(prediction)
    print("Predicted Class: ",name)

What exactly did you add to solve it?

Regardless of the problem in the code shown above, this issue happens when one tries to convert a tensor stored on the GPU (CUDA) to numpy: if the tensor variable is called x, then x.numpy() raises exactly this error. The solution is to bring the data back to the CPU first and then call numpy, as follows:
x.cpu().numpy()

If, however, one is only interested in the argmax, this can be done directly on the GPU via output.argmax(), as PyTorch supports that operation.
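Putting the two options together, here is a minimal sketch (the random tensor just stands in for a model output; .detach() is only needed if the tensor still carries gradients):

import torch

# stand-in for a model output; on a real run this would live on the GPU
output = torch.randn(1, 2, device='cuda' if torch.cuda.is_available() else 'cpu')

as_numpy = output.detach().cpu().numpy()  # copy to host memory, then convert to numpy
index = output.argmax().item()            # or take the argmax directly on the device

print(as_numpy, index)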


This worked for me… Thanks