How to load models on multiple GPUs and call forward() on each?

I load my 2 models on gpu1 and gpu2, and current_device is set to gpu1.

Then I can run a forward pass on the model on gpu1, but the model on gpu2 fails with this error:

RuntimeError: all tensors must be on devices[0]

After I change the current device with this line, the same error occurs:


Did you follow the PyTorch Data Parallelism tutorial?

@mmisiur Yes, but I think my use case is a little different.
I want to load several models onto multiple GPUs, one per GPU, and run each model on its own GPU.
Thanks for the pointer, though.

Ok, right. It's hard to say what is wrong without seeing your code. But if I understand what you want to do (load one model on one GPU, a second model on a second GPU, and pass some input through each), I think the proper way to do this, and the one that works for me, is:

# imports
import torch

# define models
m0 = torch.nn.Linear(10,5)
m1 = torch.nn.Linear(10,5)

# define devices
d0 = torch.device("cuda:0")
d1 = torch.device("cuda:1")

# define tensors
t0 = torch.rand(10)
t1 = torch.rand(10)

# move to devices
t0 = t0.to(d0)
t1 = t1.to(d1)
m0 = m0.to(d0)
m1 = m1.to(d1)

# forward pass
out0 = m0(t0)
out1 = m1(t1)

Oh, I just noticed that I didn't move the tensors…
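For anyone who wants to try the sketch above without two GPUs, here is a self-contained version that falls back to CPU when fewer than two CUDA devices are available. The fallback logic is my addition, not part of the original answer; the key point is unchanged: each input tensor must live on the same device as the model it is fed to.

```python
import torch

# Pick two devices; fall back to CPU if fewer than two GPUs are present.
if torch.cuda.device_count() >= 2:
    d0, d1 = torch.device("cuda:0"), torch.device("cuda:1")
else:
    d0 = d1 = torch.device("cpu")

# One model per device
m0 = torch.nn.Linear(10, 5).to(d0)
m1 = torch.nn.Linear(10, 5).to(d1)

# Each input must be on the same device as its model
t0 = torch.rand(10).to(d0)
t1 = torch.rand(10).to(d1)

# Each forward pass runs on its model's device
out0 = m0(t0)
out1 = m1(t1)
print(out0.shape, out1.shape)
```

Note that moving a tensor is not in-place: `t0.to(d0)` returns a new tensor, so the result must be reassigned (or created on the device directly), whereas `m0.to(d0)` moves the module's parameters in place but is conventionally reassigned too.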
