RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
How can this be possible if I am using .cpu() to send my inputs to CPU?
I guess you saved the model with CUDA tensors, so torch.load() will load it back onto the GPU, and when you then feed it CPU tensors the error is raised because the tensor types don’t match.
Basically, PyTorch’s save & load functions don’t alter the model at all, to keep things simple: CPU models are loaded onto the CPU and GPU models onto the GPU.
So in your case, there are two options if you want to run inference on the CPU after training the model on the GPU:
You can call model.cpu() before you save it. This way it will automatically be loaded onto the CPU (this is especially useful if you later want to load the model on hardware that doesn’t support CUDA, e.g. most MacBooks).
After you load the model, simply call model.cpu() and then proceed with your existing code.
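Both options could be sketched like this (using a toy `nn.Linear` as a stand-in for your actual model, and a hypothetical filename):

```python
import torch
import torch.nn as nn

# Stand-in for the trained network.
model = nn.Linear(4, 2)

# Option 1: move the model to the CPU *before* saving, so the
# checkpoint contains CPU tensors and loads anywhere, CUDA or not.
model.cpu()
torch.save(model.state_dict(), "model_cpu.pt")

# Option 2: load first, then move everything to the CPU before inference.
# (torch.load also accepts map_location="cpu" to remap GPU tensors on load.)
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("model_cpu.pt", map_location="cpu"))
restored.cpu()

# CPU inputs now match the CPU weights, so no type mismatch is raised.
x = torch.randn(1, 4)
out = restored(x)
```

Saving the `state_dict` rather than the whole model object is just the commonly recommended pattern; the same two options apply if you pickle the full model.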
If @rasbt’s suggestions don’t help, could you try assigning the result back, i.e. model = model.cpu()?
I’m not sure if this is needed in 0.4.0 but might be worth a try.
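The assignment form being suggested would look like this (with a toy module standing in for the actual model):

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 3)   # stand-in for the loaded model
model = model.cpu()       # assign the result back, just to be safe

x = torch.randn(1, 3)     # CPU input now matches the CPU weights
out = model(x)
```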
That’s how I typically do it out of habit, but I read somewhere that it should be an in-place operation. The assignment certainly doesn’t hurt, though.
Yeah, I mean this line of code.
I can’t test it at the moment, and I know it was an in-place operation in older versions.
Now I’m wondering whether it’s just a call to tensor.to, which would make the assignment necessary.
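A quick sanity check suggests the two cases behave differently: nn.Module.cpu() moves the module’s parameters in place and returns the module itself, so the assignment is optional for modules, while Tensor.to returns a new tensor whenever a conversion actually happens, so for plain tensors the assignment is required. A minimal check (dtype conversion is used below as the stand-in for a device move, so it runs on a CPU-only machine):

```python
import torch
import torch.nn as nn

m = nn.Linear(2, 2)
# Module.cpu() applies .cpu() to the parameters in place and returns self,
# so `m = m.cpu()` and a bare `m.cpu()` end up equivalent for modules.
assert m.cpu() is m

# Plain tensors are different: Tensor.to() returns a *new* tensor when a
# conversion happens, leaving the original untouched, so assignment matters.
t = torch.zeros(2)                # float32 by default
t64 = t.to(torch.float64)         # new tensor; `t` itself is unchanged
assert t64.dtype == torch.float64 and t.dtype == torch.float32
```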