How to load model on specific device in libtorch?

I want to load a model onto a specific device (CPU or CUDA) with libtorch, just like torch.jit.load('./model.pt', map_location=torch.device('cpu')) in Python.
I didn't find a corresponding argument in C++: torch::jit::load() seems to take only the model path as input.
Is there a function or argument in libtorch to specify the device, like map_location?
Thanks!

torch.jit.load takes a simplified map_location argument (so a device), which translates to the optional device argument in torch::jit::load.
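For example (a minimal sketch; "model.pt" is a placeholder path and a CUDA-enabled libtorch build is assumed):

```cpp
#include <torch/script.h>

int main() {
  // The optional second argument of torch::jit::load plays the role of
  // map_location: the module's parameters and buffers end up on that device.
  torch::jit::script::Module module =
      torch::jit::load("model.pt", torch::kCUDA);
  return 0;
}
```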

Best regards

Thomas

Yes, torch::jit::load() takes a device argument. I tried this:

```cpp
torch::Device device = torch::kCUDA;
my_model = torch::jit::load(model, device);
```

but when it reaches my_model.forward(inputs), an error occurs (for example):

 The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/torch/nn/modules/module/___torch_mangle_498.py", line 19, in forward
    input0: Tensor) -> Tensor:
    _0 = self.featneck_encode
    _1 = (self.bert).forward(attention_mask, visual_attention_mask, input_ids, input, feats, input0, )
          ~~~~~~~~~~~~~~~~~~ <--- HERE
    _2 = (_0).forward(_1, )
    _3 = torch.clamp_min(torch.norm(_2, 2, [1], True), 9.9999999999999998e-13)
  File "code/__torch__/torch/nn/modules/module/___torch_mangle_471.py", line 25, in forward
    _5 = torch.to(extended_visual_attention_mask, 6, False, False, None)
    attention_mask1 = torch.mul(torch.rsub(_5, 1., 1), CONSTANTS.c0)
    _6 = (_1).forward(feats, input0, (_2).forward(input_ids, input, ), attention_mask0, attention_mask1, )
                                      ~~~~~~~~~~~ <--- HERE
    return (_0).forward(_6, )
  File "code/__torch__/torch/nn/modules/module/___torch_mangle_4.py", line 22, in forward
    input0 = torch.expand_as(torch.unsqueeze(position_ids, 0), input_ids)
    _5 = (_4).forward(input_ids, )
    _6 = (_3).forward(input0, )
          ~~~~~~~~~~~ <--- HERE
    _7 = (_2).forward(input, )
    x = torch.add(torch.add(_5, _6, alpha=1), _7, alpha=1)
  File "code/__torch__/torch/nn/modules/module/___torch_mangle_0.py", line 8, in forward
  def forward(self: __torch__.torch.nn.modules.module.___torch_mangle_0.Module,
    input: Tensor) -> Tensor:
    position_embeddings = torch.embedding(self.weight, input, 0, False, False)
                          ~~~~~~~~~~~~~~~ <--- HERE
    return position_embeddings

Traceback of TorchScript, original code (most recent call last):
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/functional.py(1484): embedding
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/sparse.py(114): forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__
/Users/xxxx/Documents/code/huanyi_tbmultimodal_inference2/lxrt/modeling.py(339): forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__
/Users/xxxx/Documents/code/huanyi_tbmultimodal_inference2/lxrt/modeling.py(1161): forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__
/Users/xxxx/Documents/code/huanyi_tbmultimodal_inference2/lxrt/modeling.py(1270): forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/jit/__init__.py(1034): trace_module
/Users/xxxx/opt/anaconda3_py36/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/jit/__init__.py(882): trace
/Users/xxxx/Documents/code/huanyi_tbmultimodal_inference2/relevance_embeding.py(200): <module>
**RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select**

So I think the model was not loaded onto the GPU directly.
How can I avoid this error?

The obvious, if imperfect, method would be to move the model to CUDA after loading…
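Something like the following (a sketch: load on the CPU by default, then move the module, with "model.pt" again a placeholder path):

```cpp
// Load on the CPU (the default), then move all parameters and buffers over.
torch::jit::script::Module my_model = torch::jit::load("model.pt");
my_model.to(torch::kCUDA);
```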

I tried my_model.to(torch::kCUDA), but it didn't work. The error still occurs.

Did you ever solve this? I have no earthly idea how to get a LibTorch module onto CUDA in C++.
I mean, where in the code do you put the ->to(device) or .to(device) call on the model?
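For what it's worth, here is a minimal end-to-end sketch of where the calls usually go. The file name, input shape, and single-tensor input are illustrative assumptions; note that the trace above complains about a CPU 'index' tensor (argument #3 of the embedding op), which suggests the input tensors also have to be moved to the same device as the module:

```cpp
#include <torch/script.h>
#include <vector>

int main() {
  torch::Device device(torch::kCUDA);

  // Either load directly onto the device, or load and then call .to(device);
  // both should leave the module's weights on the GPU.
  torch::jit::script::Module module = torch::jit::load("model.pt", device);
  module.to(device);  // redundant after loading with a device, but harmless

  // The inputs must live on the same device as the module, otherwise ops
  // like embedding fail with "Expected ... cuda but got ... cpu".
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 16}, torch::kLong).to(device));

  torch::Tensor output = module.forward(inputs).toTensor();
  return 0;
}
```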