C++ model inference on GPU

Hi,
I am following the tutorial on loading a traced model and running a forward pass in C++, using the preview version (https://pytorch.org/tutorials/advanced/cpp_export.html).

Running this on the CPU works as expected, but when I try running on the GPU I get this error from the module->forward() call:

"Input type (Variable[CUDAFloatType]) and weight type (Variable[CPUFloatType]) should be the same…"

It seems obvious that the weights need to be of type "CUDA". In Python I would have called .cuda() on the model, but I cannot figure out how to do this with the C++ API.
I am far from an expert here…
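For reference, here is roughly what I'm running, adapted from the tutorial. The model path and input shape are placeholders for my actual model; the only change from the CPU version is moving the input tensor to CUDA:

```cpp
#include <torch/script.h> // One-stop header for TorchScript, per the tutorial

#include <iostream>
#include <memory>
#include <vector>

int main() {
  // Deserialize the ScriptModule traced and saved from Python
  // ("traced_model.pt" is a placeholder path).
  std::shared_ptr<torch::jit::script::Module> module =
      torch::jit::load("traced_model.pt");

  // Create a dummy input and move it to the GPU (shape is a placeholder).
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}).to(torch::kCUDA));

  // This forward() call is where the CUDAFloatType / CPUFloatType
  // mismatch error is raised, since the weights are still on the CPU.
  at::Tensor output = module->forward(inputs).toTensor();
  std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
}
```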