How can I select certain GPUs with DataParallel?

Hello, PyTorch!

I have 8 GPUs in total in my workstation and I want to train an LSTM model on 2 of them.
(The other 6 GPUs are already occupied.)

Please let me know how I can select two GPUs with DataParallel.

Thanks, PyTorch!

Have a look at this solution; it’s the preferred approach!

The other option is to pick the GPUs with PyTorch-internal functions such as torch.cuda.set_device(), but it’s not recommended. I think that’s because the device indices PyTorch uses may not match the ones shown by nvidia-smi, and most likely there are other reasons too.
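The linked approach is usually to restrict the process to specific GPUs via the `CUDA_VISIBLE_DEVICES` environment variable before CUDA is initialized, which also sidesteps the index-mismatch issue. A minimal sketch, assuming the two free GPUs are the ones nvidia-smi numbers 2 and 3 (those indices are an assumption):

```python
import os

# Restrict this process to physical GPUs 2 and 3 (as numbered by nvidia-smi).
# This must happen before torch initializes CUDA; inside the process the two
# visible GPUs are then renumbered as cuda:0 and cuda:1.
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

import torch
import torch.nn as nn

model = nn.LSTM(input_size=10, hidden_size=20)

if torch.cuda.is_available():
    # device_ids refer to the *remapped* indices, so 0 and 1 here.
    model = nn.DataParallel(model.cuda(), device_ids=[0, 1])
```

You can also set the variable from the shell instead (`CUDA_VISIBLE_DEVICES=2,3 python train.py`), which keeps the selection out of the code entirely.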

Hi alex.veuthey
Thank you for your help.

Can I just solve this problem with “nn.DataParallel(model, device_ids=[0, 1])”?

Yes, probably, but there might be an index mismatch between PyTorch and nvidia-smi. Try it and see!
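For completeness, a minimal sketch of that `device_ids` usage with an LSTM (the model sizes and batch shape are placeholders, and it falls back to the CPU when fewer than two GPUs are visible):

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)

if torch.cuda.device_count() >= 2:
    # Replicate the model on devices 0 and 1; DataParallel splits each
    # input batch across the listed devices and gathers the outputs.
    model = nn.DataParallel(model.cuda(), device_ids=[0, 1])

x = torch.randn(4, 5, 10)  # (batch, seq_len, input_size)
out, _ = model(x)
print(out.shape)  # (4, 5, 20): batch, seq_len, hidden_size
```

Note that `device_ids` here are PyTorch’s device indices, which is exactly where the mismatch with nvidia-smi numbering can bite if `CUDA_VISIBLE_DEVICES` is not set.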