Limit process to single GPU

Hi all,

I have a setup with 4 GPUs. When working with multiple processes in PyTorch, is there a way to enforce that a process only accesses a given single GPU, thereby limiting the CUDA driver context to be created only once per process?

Thanks in advance for your help,

Benjamin

Hi @besterma,

Sure, you can do it with the env variable CUDA_VISIBLE_DEVICES.

E.g., to use GPUs 0 and 2:

CUDA_VISIBLE_DEVICES=0,2 python pytorch_script.py

and in your case you would give each process a different value for that variable.
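For example, a minimal launcher sketch (assuming a worker script called pytorch_script.py and one process per GPU, both of which are placeholders for your setup) could look like this:

import os
import subprocess

# Start one copy of the worker script per GPU, each with its own
# CUDA_VISIBLE_DEVICES, so every child only sees a single device.
procs = []
for gpu_id in range(4):
    env = os.environ.copy()
    env['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
    procs.append(subprocess.Popen(['python', 'pytorch_script.py'], env=env))

for p in procs:
    p.wait()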


Thank you @spanev!

In case anyone is wondering, here is how to set process-specific env variables:

import torch.multiprocessing as _mp
import torch
import os

mp = _mp.get_context('fork')


class Process(mp.Process):
    def __init__(self):
        # Runs in the main process.
        super().__init__()
        print("Init Process")

    def run(self):
        # Runs in the child process. Set CUDA_VISIBLE_DEVICES here,
        # before any CUDA call, so only this one device is visible.
        print("Hello World!")
        os.environ['CUDA_VISIBLE_DEVICES'] = '1'
        print(torch.cuda.device_count())    # 1
        print(torch.cuda.current_device())  # 0 (the only visible device)


if __name__ == "__main__":
    num_processes = 1
    os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
    processes = [Process() for _ in range(num_processes)]
    for p in processes:
        p.start()
    print("main: " + os.environ['CUDA_VISIBLE_DEVICES'])
    for p in processes:
        p.join()

It is important to set the variable in the run method of the process: __init__ is still executed in the main process, so setting it there would change the environment of the main process instead. It also has to happen before the first CUDA call in the child, since CUDA_VISIBLE_DEVICES is only read when the CUDA context is initialized.
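To extend this to several workers, one possible sketch (the Worker class and the one-process-per-GPU loop are just assumptions for illustration) is to pass the device index into the constructor and only apply it inside run():

import os
import torch
import torch.multiprocessing as _mp

mp = _mp.get_context('fork')


class Worker(mp.Process):
    def __init__(self, gpu_id):
        super().__init__()
        # Stored in the parent; no CUDA calls happen here.
        self.gpu_id = gpu_id

    def run(self):
        # Executed in the child, before any CUDA call, so the child
        # only ever sees its assigned device (exposed as cuda:0).
        os.environ['CUDA_VISIBLE_DEVICES'] = str(self.gpu_id)
        print(self.gpu_id, torch.cuda.device_count())  # prints "<gpu_id> 1"


if __name__ == "__main__":
    workers = [Worker(gpu_id) for gpu_id in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()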
