How to use multiple GPUs in LibTorch?

Does anyone have an example?

Yes, you can. You can create a TensorOptions object by passing it both the device type and a device index. The default index is -1, which means PyTorch will always use the same single device; you can explicitly specify an index (0, 1, etc.) to target a particular GPU.
Here is an example:

Thanks for the reply. Does that mean that if I want to deploy with LibTorch, I need to write all the code myself, like what nn.DataParallel does?

I don’t think so. What you need to do is mark your tensors (input and output) with the GPU you want them to be allocated/processed on. It is not related to the model.

Does the model not need to be put on the other devices?

Hi. Is there any update regarding this question?