Input Type and WeightType mismatch in moving GPU model to CPU

Everything is fine, including loading the model…except the evaluation. I am still getting the same error.

for param in model.parameters():
  print(param.data.type())

#OUTPUT: torch.FloatTensor
torch.FloatTensor
torch.FloatTensor
torch.FloatTensor

My evaluation code is:

x = data[:2, :]
x = x.to('cpu')  # .to() is not in-place; it returns a new tensor
x.type()
#output: 'torch.DoubleTensor'
y = model(x.type(torch.FloatTensor))

So I really have no clue where it is going wrong… I think the model still expects the input tensors to be on the GPU, which is confusing since the model has been explicitly loaded onto the CPU. I am using PyTorch 1.0.1.
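For reference, here is a minimal sketch of the CPU round-trip being attempted, using a hypothetical stand-in model (the actual custom network is not shown in the question). The key details are that `.to()` returns a new tensor rather than modifying in place, and that a `DoubleTensor` input must be cast to `float32` to match `FloatTensor` weights:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the custom model in the question
model = nn.Linear(4, 2)
model.to('cpu')
model.eval()

# float64 data, mirroring the DoubleTensor in the question
data = torch.randn(3, 4, dtype=torch.float64)

x = data[:2, :]
# Reassign: .to() is NOT in-place, and it can cast dtype and move device at once
x = x.to(device='cpu', dtype=torch.float32)

with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([2, 2])
```

If the error persists after this, some tensor inside the model's own `forward` is still being moved to the GPU.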

EDIT: I’ve solved the issue… it turns out my custom NN was explicitly calling `.cuda()` on the input tensor. Thanks for all the help @MariosOreo! Definitely gave me some clarity.
