I’m trying to use the GPU with my code. This is the CUDA-related part:
model = model.cuda()
model = nn.DataParallel(model, device_ids=range(torch.cuda.device_count))
But I get this error:
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
PyTorch already detects the GPU on my machine. I checked it with this code:
>>> import torch
>>> torch.cuda.is_available()
True
It looks like you are passing the function torch.cuda.device_count itself, instead of its result, to device_ids (note the missing parentheses).
Try calling this method to get the device count:
device_ids=range(torch.cuda.device_count())
Thanks for your advice, but sadly that was just a typo in my post. My actual code is
model = nn.DataParallel(model, device_ids=range(torch.cuda.device_count()))
so I think I need another piece of advice.
Could you try calling DataParallel first, then pushing the model to the device?
I don’t think it’ll get rid of the error, but it’s the recommended order, if I’m not mistaken.
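As a minimal sketch of that order (using a stand-in nn.Linear, since the actual model isn’t shown here):

```python
import torch
import torch.nn as nn

# Stand-in model; substitute your own network here
model = nn.Linear(10, 2)

# Wrap with DataParallel first, then push the wrapped model to the GPU
if torch.cuda.is_available():
    model = nn.DataParallel(model, device_ids=range(torch.cuda.device_count()))
    model = model.cuda()
```

The guard on torch.cuda.is_available() just keeps the sketch runnable on a CPU-only machine.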
If that doesn’t help, could you check whether you have
.cuda() calls inside your model?
If so, these will mess up the
DataParallel wrapper, as they force that particular tensor to be pushed to the specified device.
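For illustration, a hypothetical model showing this problematic pattern might look like:

```python
import torch.nn as nn

class BadModel(nn.Module):
    """Hypothetical example: the hard-coded .cuda(0) call inside forward
    pins the activation to device 0, which conflicts with the replicas
    DataParallel creates on the other GPUs."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        x = x.cuda(0)  # forces this tensor onto device 0 in every replica
        return self.fc(x)
```

Removing the hard-coded .cuda(0) and letting DataParallel scatter the input batch across devices avoids the device mismatch.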