Does torchvision.models.resnet not work on GPU?

Hi, I tried to use resnet from torchvision.models on a GPU and got the error below. Does anyone know a solution? I'm using the latest PyTorch and torchvision from conda.


  • code
import torch
from torch.autograd import Variable
from torchvision.models import resnet18

model = resnet18()
model.cuda()
input = Variable(input).cuda()  # `input` is a batch from train_loader in the real script
output = model(input)
  • output
Traceback (most recent call last):
  File "tinyimagenet.py", line 82, in <module>
    train(model, optimizer, train_loader)
  File "tinyimagenet.py", line 35, in train
    output = model(input)
  File "/home/user/.anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/.anaconda/lib/python3.6/site-packages/torchvision-0.1.8-py3.6.egg/torchvision/models/resnet.py", line 139, in forward
  File "/home/user/.anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/.anaconda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 237, in forward
    self.padding, self.dilation, self.groups)
  File "/home/user/.anaconda/lib/python3.6/site-packages/torch/nn/functional.py", line 40, in conv2d
    return f(input, weight, bias)
RuntimeError: expected CPU tensor (got CUDA tensor)

Thank you.

Sorry, this issue has already been reported (https://github.com/pytorch/pytorch/issues/1472).

In the real code, I forgot to write input = Variable(input).cuda() and did just Variable(input).cuda() without assigning the result, so the input was never actually moved to the GPU.
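For anyone who hits the same RuntimeError, here is a minimal sketch of the difference (the dummy input and its shape are just for illustration; in the real script the batch comes from the data loader):

import torch
from torch.autograd import Variable
from torchvision.models import resnet18

model = resnet18().cuda()             # model parameters live on the GPU
input = torch.randn(1, 3, 224, 224)   # dummy CPU batch, stands in for a batch from the loader

# Wrong: .cuda() returns a new CUDA Variable instead of modifying `input` in place,
# so `input` is still a CPU tensor when it reaches the CUDA model.
Variable(input).cuda()
# output = model(input)  # raises the RuntimeError above

# Right: assign the CUDA Variable back to the name that is passed to the model.
input = Variable(input).cuda()
output = model(input)                 # runs on the GPU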
