How to push data and model onto the same GPU if using multiple GPUs via nn.DataParallel(model)

Hello,

I have CNN code that works fine on the CPU but fails on multiple GPUs.

The error message is:
RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution)

Apart from the following two lines, I didn't do anything specific for multiple GPUs:
model = nn.DataParallel(model)
data, label = data.to(device), label.to(device)

I have searched around, and it seems the problem is that the input data are not on the same GPU as the model. In my case, how do I push the data and the model onto the same GPU? Many thanks in advance.

You can try the following if you want both the data and the label on the same device.

device = torch.device("cuda:0")
data, label = data.to(device), label.to(device)
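Putting the two lines in context, a minimal sketch of the usual DataParallel setup (the Conv2d layer and tensor shapes here are stand-ins for your CNN, not code from the thread): move the model to the first device before wrapping it, and send the inputs to that same device. DataParallel then replicates the weights to the other GPUs and scatters each batch along dim 0 on its own. The CPU fallback is only there so the snippet runs anywhere.

```python
import torch
import torch.nn as nn

# Fall back to CPU so the sketch also runs on machines without CUDA.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Conv2d(3, 8, kernel_size=3)   # stand-in for your CNN
model = model.to(device)                 # move the weights FIRST ...
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)       # ... then wrap for multi-GPU

# Inputs (and labels) go to the same device as the wrapped model;
# DataParallel splits the batch across GPUs during forward.
data = torch.randn(4, 3, 32, 32).to(device)
out = model(data)
print(out.shape)                         # torch.Size([4, 8, 30, 30])
```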

Hi Sayed,

Thank you for the reply. I am using multiple GPUs, and because the code runs on a server, I get a different machine each time, so I don't necessarily have GPU 0 available. Also, the message says the input and the weight are not on the same device. How does that happen? What is device 1, and why should it equal 0?
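For reference, here is a hedged sketch of what is likely going on (an interpretation of the error, not something stated in the thread). During forward, nn.DataParallel splits the batch and sends chunk i to GPU i, running a replica of the model on each. "device 1 does not equal 0" means the chunk on GPU 1 met weights that were still on device 0, which typically happens when the model is moved to the GPU after wrapping (or not at all). The fix is move-then-wrap; and since schedulers usually remap your allocated GPUs via CUDA_VISIBLE_DEVICES, "cuda:0" just means the first GPU your job was given, not physical GPU 0. The Linear layer below is a toy stand-in; the guards let it run on CPU-only machines.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                  # toy stand-in for the CNN
if torch.cuda.is_available():
    model = model.to("cuda:0")            # weights on device_ids[0] first
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)        # replicas are built from cuda:0

x = torch.randn(8, 16)
if torch.cuda.is_available():
    x = x.to("cuda:0")                    # inputs go to the same device
print(model(x).shape)                     # torch.Size([8, 4])
```

The order matters: wrapping first and calling .to() afterwards (or constructing tensors pinned to a fixed GPU inside forward) is a common way to end up with the input/weight device mismatch in the error above.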