C++ TorchScript: Running a ResNet18 TorchScript model on the GPU

Hi all,

I am new to the C++ API and was following the tutorial at https://pytorch.org/tutorials/advanced/cpp_export.html

I am able to successfully save the model in Python, import the serialized script module, and run inference on it. The tutorial example runs everything on the CPU, so, using some of the other C++ code examples as a reference, I loaded the model onto the GPU:

// Pick the device at runtime, then move the loaded module onto it.
torch::DeviceType device_type;
if (torch::cuda::is_available()) {
    std::cout << "CUDA available! Running on GPU." << std::endl;
    device_type = torch::kCUDA;
} else {
    std::cout << "Running on CPU." << std::endl;
    device_type = torch::kCPU;
}
torch::Device device(device_type);

torch::jit::script::Module module = torch::jit::load(path_to_resnet_model);
module.to(device);

However, I am having trouble moving the input tensors to the GPU. The example code creates the input on the CPU:

std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({ 1, 3, 224, 224 }));

What is the best way to move this input tensor to the GPU, and then run inference so that the output is also a GPU tensor?
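
For context, here is the rough direction I was guessing at, just calling .to(device) on the input before forward(). I am not sure whether this is correct, or whether the output actually stays on the GPU:

std::vector<torch::jit::IValue> inputs;
// Move the input tensor to the same device the module lives on.
inputs.push_back(torch::ones({ 1, 3, 224, 224 }).to(device));

// If the module and its inputs are both on CUDA, forward() should return a CUDA tensor.
at::Tensor output = module.forward(inputs).toTensor();
std::cout << "Output is on: " << output.device() << std::endl;

Alternatively, I suppose the tensor could be created directly on the device with torch::ones({ 1, 3, 224, 224 }, torch::device(device)). Is one of these approaches preferred over the other?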

Thanks in advance.