Models on Different GPUs

I wonder if there’s a way to load two (or n) different models on two (or n) different GPUs. For example, could I load my CNN on GPU device ID 0 and an RNN on GPU device ID 1? Is there a way to do this in PyTorch?

When calling model.cuda() you can pass a device id as an argument.
E.g.: cnn.cuda(0) and rnn.cuda(1).
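
For example, a minimal sketch with hypothetical model definitions (the `.to('cuda:0')` / `.to('cuda:1')` spelling is equivalent and more common these days); note that each model's inputs have to live on the same device as the model:

```python
import torch
import torch.nn as nn

# hypothetical small models, just for illustration
cnn = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).cuda(0)  # or .to('cuda:0')
rnn = nn.LSTM(input_size=32, hidden_size=64).cuda(1)         # or .to('cuda:1')

# inputs must be created on (or moved to) the matching device
img = torch.randn(8, 3, 32, 32, device='cuda:0')
seq = torch.randn(10, 8, 32, device='cuda:1')

cnn_out = cnn(img)        # runs on GPU 0
rnn_out, _ = rnn(seq)     # runs on GPU 1
```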


I think @apaszke’s answer on a previous post here helps too, and is also closer to what I’m looking for, although I’m not trying to split a single model across GPUs: Model parallelism in pytorch for large(r than 1 GPU) models?
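
For reference, that thread is about splitting one model’s layers across devices. A minimal sketch of that pattern (hypothetical layer sizes, assuming two visible GPUs) looks roughly like this:

```python
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    """Places consecutive stages of one model on different GPUs."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(128, 256).to('cuda:0')
        self.stage2 = nn.Linear(256, 10).to('cuda:1')

    def forward(self, x):
        x = self.stage1(x.to('cuda:0'))
        # move the intermediate activation to the second GPU
        x = self.stage2(x.to('cuda:1'))
        return x

model = TwoGPUModel()
out = model(torch.randn(32, 128))
print(out.device)  # cuda:1
```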

@ptrblck is there a less manual way of placing different models on different devices?

You could take a look at the native pipeline parallelism implementation, and also check out e.g. Megatron-LM (also available in apex) or DeepSpeed, which apply different parallelism approaches.
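
For instance, a minimal sketch of the built-in Pipe wrapper (the exact module path has moved between PyTorch releases; this assumes `torch.distributed.pipeline.sync.Pipe`, which needs the RPC framework to be initialized first):

```python
import os
import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

# Pipe uses the RPC framework internally, so it has to be initialized first.
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
rpc.init_rpc('worker', rank=0, world_size=1)

# Place consecutive stages on different GPUs.
fc1 = nn.Linear(16, 8).cuda(0)
fc2 = nn.Linear(8, 4).cuda(1)
model = nn.Sequential(fc1, fc2)

# Each input batch is split into micro-batches ("chunks") that are
# pipelined through the two stages.
model = Pipe(model, chunks=8)

x = torch.rand(16, 16).cuda(0)
output_rref = model(x)  # returns an RRef; .local_value() gives the tensor
print(output_rref.local_value().shape)
```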