How to specify a GPU using the PyTorch C++ API

I have read the simple tutorial (https://pytorch.org/tutorials/advanced/cpp_export.html), and the doc just tells me to use the following code to run on the GPU:

model->to(at::kCUDA);

However, I have several GPUs on my server, and I want to use a specific GPU, for example GPU 0, just like the following Python code:

device = torch.device("cuda:0")
model.to(device)

Could you give me some advice? Thanks a lot.

After some experimenting, I found that the following code works well:

module->to(at::Device("cuda:0"));
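For reference, below is a minimal end-to-end sketch of loading a TorchScript module and running it on a chosen GPU. The file name "traced_model.pt" is a placeholder for your own exported model, and torch::Device(torch::kCUDA, 0) is just an equivalent way to spell "cuda:0". Note that depending on your LibTorch version, torch::jit::load returns the module by value (use module.to) or as a std::shared_ptr (use module->to as above).

#include <torch/script.h>  // TorchScript loading and inference
#include <iostream>
#include <vector>

int main() {
  // Pick the target GPU explicitly; index 0 here corresponds to "cuda:0".
  torch::Device device(torch::kCUDA, 0);
  // Equivalent: torch::Device device("cuda:0");

  // "traced_model.pt" is a placeholder path for your exported TorchScript file.
  torch::jit::script::Module module = torch::jit::load("traced_model.pt");
  module.to(device);

  // Inputs must live on the same device as the module.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}).to(device));

  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.sizes() << std::endl;
  return 0;
}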