How to set up model parallelism

I ran into a problem while setting up a transfer learning task. In my task, the source model and the destination model can't both fit on a single GPU, so is there any function or method to run the models on different GPUs?

If you have two different models, you can push them to different GPUs using their device ids:

model1 = ...
model2 = ...

model1 = model1.to('cuda:0')
model2 = model2.to('cuda:1')

Note that you also have to push the inputs and intermediate outputs to the appropriate GPU:

x = x.to('cuda:0')               # the input must live on the same GPU as model1
output = model1(x)               # output is on cuda:0
output = output.to('cuda:1')     # move it to GPU1 before feeding it to model2
output = model2(output)          # final output is on cuda:1
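
If the two models form a single pipeline, the same idea can be wrapped in one nn.Module so the device transfers happen inside forward(). Below is a minimal sketch under assumptions of my own (the TwoGPUModel class, the layer sizes, and the device ids are illustrative, not from the original question):

import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        # first stage on GPU0, second stage on GPU1 (hypothetical layer sizes)
        self.stage1 = nn.Sequential(nn.Linear(1024, 512), nn.ReLU()).to('cuda:0')
        self.stage2 = nn.Linear(512, 10).to('cuda:1')

    def forward(self, x):
        x = x.to('cuda:0')       # inputs must be on stage1's device
        x = self.stage1(x)
        x = x.to('cuda:1')       # move the activation to stage2's device
        return self.stage2(x)

model = TwoGPUModel()
x = torch.randn(8, 1024)         # batch starts on the CPU; forward() handles the transfers
out = model(x)                   # out lives on cuda:1
out.sum().backward()             # autograd handles the cross-device backward pass

Autograd tracks the .to() calls, so gradients flow back across both GPUs without any extra work; the target tensor for your loss just has to be on the same device as the final output (cuda:1 in this sketch).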