I have a tensor loaded with data, and I am trying to move it to CUDA and then forward it through a module that was also moved to CUDA.
For example:
input.to(torch::kCUDA);
std::vector<torch::jit::IValue> inputs;
inputs.emplace_back(input); // input is an at::Tensor that was .to(torch::kCUDA)
module.to(torch::kCUDA);
auto result = module.forward(inputs);
This fails with:
Expected object of device type cuda but got device type cpu for argument #1 ‘self’ in call to _thnn_conv2d_forward
The above operation failed in interpreter.
Do I need to convert the IValue to CUDA somehow, or does an IValue keep whatever device its tensor was on when it was inserted? When I check the device of the tensor that I push into the IValue vector, it reports CUDA.