Pre-trained model on GPU

Hi! I want to use a pre-trained model to classify an image on the GPU. Could someone please help? I did the following (my model is alexnet and the image variable is img_var):

alexnet.cuda()
alexnet(img_var)

and it shows the following error:

File "try.py", line 168, in
alexnet(img_var)

RuntimeError: expected CPU tensor (got CUDA tensor)

Please help :slight_smile: I want to run inference on the image on the GPU.

You also need to move your input tensor to the GPU:

alexnet.cuda()
alexnet(img_var.cuda())
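
For reference, here is a minimal end-to-end sketch of the whole flow, assuming torchvision is installed and img is already a preprocessed 3x224x224 float tensor (the preprocessing itself is not shown):

import torch
import torchvision.models as models

# load a pre-trained AlexNet and move its parameters to the GPU
alexnet = models.alexnet(pretrained=True)
alexnet.cuda()
alexnet.eval()  # disable dropout for inference

# move the input to the same device as the model
# (img is assumed to be a preprocessed 3x224x224 float tensor)
img_var = img.unsqueeze(0).cuda()  # add a batch dimension

with torch.no_grad():  # no gradients needed for inference
    output = alexnet(img_var)

pred = output.argmax(dim=1)  # index of the predicted class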

I have done that already. Also, is it possible that my GPU inference time is much larger than the CPU time?
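
One common reason for that: CUDA kernels run asynchronously, and the first call on the GPU pays one-time initialization and warm-up costs, so timing a single forward pass without synchronizing can make the GPU look far slower than it really is. A rough timing sketch, assuming alexnet and img_var are already on the GPU as above:

import time
import torch

# warm-up pass so one-time CUDA initialization is not counted
with torch.no_grad():
    alexnet(img_var)

torch.cuda.synchronize()  # wait for pending GPU work before starting the clock
start = time.time()
with torch.no_grad():
    for _ in range(100):
        alexnet(img_var)
torch.cuda.synchronize()  # make sure the GPU has actually finished
print("avg GPU time per forward pass:", (time.time() - start) / 100)

Also note that for a single small image the host-to-device transfer adds overhead, so the speedup over CPU may be modest; the GPU advantage usually shows up with larger batches.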