Expected object of device type cuda but got device type cpu for argument #1 ‘self’ in call to _thnn_conv2d_forward

Given a tensor loaded with data, I am trying to convert it to CUDA and then forward it through a module that was loaded in CUDA.

For example:

input.to(torch::kCUDA);
std::vector<torch::jit::IValue> inputs;
inputs.emplace_back(input);  // input is an at::Tensor on which .to(torch::kCUDA) was called
module.to(torch::kCUDA);
result = module.forward(inputs);

This results in

Expected object of device type cuda but got device type cpu for argument #1 ‘self’ in call to _thnn_conv2d_forward
The above operation failed in interpreter.

Do I need to convert the IValue to cuda somehow? Or do the values in IValue hold their original device? When I check the device of the tensor I load into the IValue vector, it is of device CUDA.

Could you try to reassign input via:

input = input.to(torch::kCUDA);

and rerun the code, please?

Thanks this worked.

I’m curious: why is the module’s `to` an in-place operation, while the tensor’s `to` returns a new value?

It’s reflecting the Python frontend, which works in the same way.
`Module::to()` is called recursively on all submodules and their parameters as well as buffers, which is why you don’t have to reassign modules.
