How to set up model parallelism

If you have two different models, you can push them to different GPUs by specifying the device id:

model1 = ...  # any nn.Module
model2 = ...

model1 = model1.to('cuda:0')  # place the first model on GPU0
model2 = model2.to('cuda:1')  # place the second model on GPU1

Note that you also have to move the intermediate outputs (and the inputs) to the appropriate GPU:

output = model1(x)            # output is on GPU0
output = output.to('cuda:1')  # move it to GPU1 before feeding it to model2
output = model2(output)