I am using Python multiprocessing to spawn multiple processes, each of which runs its own model object. I also have multiple GPUs available. How can I allocate a different GPU to each process (so that each model runs on a separate GPU)? Does PyTorch do this by default, or does it run all processes on one GPU unless told otherwise?
PyTorch doesn't do this by default. You need to explicitly select the desired GPU in each process, e.g. via torch.cuda.set_device.
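For reference, here is a minimal sketch of one common pattern: spawn one worker per GPU and have each worker pin itself to its own device before building its model. The torch-specific calls are shown as comments so the skeleton itself has no GPU dependency; MyModel and the GPU count of 4 are placeholders, not anything from this thread.

```python
import multiprocessing as mp


def assign_gpu(rank, num_gpus):
    # Map each process rank to its own device id (one model per GPU).
    return rank % num_gpus


def worker(rank, num_gpus):
    gpu_id = assign_gpu(rank, num_gpus)
    # In a real worker, pin this process to its GPU before touching CUDA:
    #   import torch
    #   torch.cuda.set_device(gpu_id)
    #   model = MyModel().cuda(gpu_id)  # MyModel is a hypothetical placeholder
    print("process {} -> GPU {}".format(rank, gpu_id))


if __name__ == "__main__":
    # 'spawn' is needed when child processes use CUDA.
    mp.set_start_method("spawn", force=True)
    num_gpus = 4  # in practice: torch.cuda.device_count()
    procs = [mp.Process(target=worker, args=(r, num_gpus)) for r in range(num_gpus)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

With torch.multiprocessing the import line changes but the structure stays the same, since it wraps the standard-library module.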
Yeah, I tried that. How can I see the available GPU IDs from within PyTorch itself, though?
Could this work: http://pytorch.org/docs/master/cuda.html#torch.cuda.device_count ?
When I print torch.cuda.device_count() it shows 4. But when, inside a spawned process, I call torch.cuda.set_device(1) (which works fine in single-process code), I get a multiprocessing error (the traceback only shows a file name, nothing else):
Traceback (most recent call last):
  File "/tools/anaconda3/envs/py35/lib/python3.5/multiprocessing/process.py", line 252, in
torch.cuda.set_device(0) works absolutely fine, though.
Could you please provide any help in this regard?
Uhh, interesting. How are you doing the multiprocessing?
@dakshanand you probably didn't switch to spawn mode. See:
Line 21 - I am using torch.multiprocessing
Line 240 - I am using the 'spawn' method
Line 162 - torch.cuda.set_device(1) [if I put 0 here it works fine]
Could you put the spawn line right after you import mp, and try again?
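That is, something like the following at the top of the file. This is a sketch using the standard-library module; with PyTorch you would write `import torch.multiprocessing as mp` instead, which forwards to the same machinery.

```python
import multiprocessing as mp
# (with PyTorch: import torch.multiprocessing as mp)

# Set the start method once, immediately after the import and before any
# Process objects are created; 'spawn' is what CUDA subprocesses need.
# force=True avoids a RuntimeError if a method was already set elsewhere.
mp.set_start_method("spawn", force=True)
```

Setting it anywhere later, after a Process has been created, has no effect on processes already using the default 'fork' method, which is the usual cause of CUDA errors like the one above.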