How to use multiple GPUs in LibTorch?

Does anyone have an example?

@PistonY
Yes, you can. You can create a TensorOptions object by passing both the device type and a device index; the default index is -1, which means PyTorch will always use the current default device. You can explicitly set the index (0, 1, etc.) to target a specific GPU.
Here is an example:
https://github.com/pytorch/pytorch/blob/5fd037ce4450d2a7bb477fa0f58677d7b256fdfe/test/cpp/api/tensor_cuda.cpp#L16
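
For illustration, here is a minimal sketch (not taken from the linked test) of allocating tensors on specific GPUs via a device index. It assumes a CUDA build of LibTorch with at least two devices available:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  if (torch::cuda::device_count() < 2) {
    std::cerr << "This sketch needs at least two CUDA devices.\n";
    return 0;
  }
  // An explicit device index picks a specific GPU; omitting the index
  // (equivalent to -1) falls back to the current default device.
  auto opts0 = torch::TensorOptions().device(torch::Device(torch::kCUDA, 0));
  auto opts1 = torch::TensorOptions().device(torch::Device(torch::kCUDA, 1));

  torch::Tensor a = torch::randn({2, 3}, opts0);  // allocated on GPU 0
  torch::Tensor b = torch::randn({2, 3}, opts1);  // allocated on GPU 1

  std::cout << a.device() << " " << b.device() << std::endl;  // cuda:0 cuda:1
  return 0;
}
```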

Thanks for the reply. Does that mean that if I want to deploy with LibTorch, I need to write all of that code myself, like what nn.DataParallel does?

@PistonY
I don’t think so. What you need to do is mark your tensors (input and output) with the GPU you want them to be allocated/processed on. It is not related to the model.
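
To make that concrete, here is a minimal sketch of marking a tensor for a particular GPU with .to(); it assumes a CUDA build and that device index 1 exists:

```cpp
#include <torch/torch.h>

int main() {
  torch::Tensor input = torch::randn({4, 8});   // starts on the CPU
  torch::Device gpu1(torch::kCUDA, 1);          // assumes a second GPU exists

  torch::Tensor on_gpu1 = input.to(gpu1);       // copy the tensor to GPU 1
  torch::Tensor result = on_gpu1 * 2;           // computed on GPU 1
  torch::Tensor back = result.to(torch::kCPU);  // copy the result back
  return 0;
}
```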

Doesn’t the model need to be put on the other devices as well?
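
For reference, a LibTorch module can itself be moved to a device with Module::to, which relocates all of its parameters and buffers. A minimal sketch, assuming a CUDA build with device index 1 available:

```cpp
#include <torch/torch.h>

int main() {
  torch::nn::Linear model(8, 4);                 // parameters start on the CPU
  torch::Device gpu1(torch::kCUDA, 1);           // assumes a second GPU exists

  model->to(gpu1);                               // move weights and bias to GPU 1
  torch::Tensor input =
      torch::randn({2, 8}, torch::TensorOptions().device(gpu1));
  torch::Tensor output = model->forward(input);  // runs on GPU 1
  return 0;
}
```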

Hi. Is there any update regarding this question?
@glaringlee