RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

Hi.

I’m trying to use the GPU with my code. This is the CUDA-related part of the code:

model = model.cuda()
model = nn.DataParallel(model, device_ids=range(torch.cuda.device_count))

But I get this error: RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu.

PyTorch already knows that I have a GPU on my computer. I checked it with this code:

>>> import torch
>>> torch.cuda.device_count()
1

It looks like you are passing the function torch.cuda.device_count to device_ids (note the missing parentheses).
Try calling the method so that you actually get the device count: device_ids=range(torch.cuda.device_count()).
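
I.e., something along these lines, with the parentheses added so range() receives an actual number (using a small placeholder model just for illustration):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model, just for illustration
# device_count() is called here (note the parentheses), so range() gets an int
model = nn.DataParallel(model, device_ids=range(torch.cuda.device_count()))
model = model.cuda()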

Thanks for your advice, but sadly that was just a typo in my post.
My actual code is device_ids=range(torch.cuda.device_count()).
I think I need more advice.

Could you try to call DataParallel first, then push the model to the device?
I don’t think it’ll get rid of the error, but it’s the recommended way, if I’m not mistaken.
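
Roughly like this (with a placeholder model, just to show the order of the calls):

import torch
import torch.nn as nn

device = torch.device("cuda:0")

model = nn.Linear(10, 2)  # placeholder model, just to show the call order
model = nn.DataParallel(model, device_ids=range(torch.cuda.device_count()))
model = model.to(device)  # push to the GPU after wrapping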

If that doesn’t help, could you check if you have .to(device) or .cuda() calls inside your model?
If so, these will mess up the DataParallel wrapper, as they force that particular tensor onto the specified device.
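
For example, a (hypothetical) model like this would cause trouble, because the hard-coded device call pins the tensor to one GPU no matter which replica DataParallel is running:

import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical example
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        x = x.cuda(0)  # forces x onto cuda:0 inside every replica
        return self.fc(x)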
