Train a model using a specific GPU

I have two GPUs, and GPU 0 is already in use, so I want to train my model on GPU 1. However, I've tried

os.environ["CUDA_VISIBLE_DEVICES"] = "1" in my script,

and

CUDA_VISIBLE_DEVICES=1

but neither works.


The program just keeps running on GPU 0. Does anyone have a solution?

Hi,

Setting the env variable within Python won't work; it needs to be set in the shell before you run your script.
This might help as well.

I've already tried CUDA_VISIBLE_DEVICES=1 python main.py and it does not work.


Hey,
I'm not sure if this will be helpful, but if you use PyTorch 0.3.1 you can direct your model to run on a specific GPU by using model.cuda(GPU_ID), where GPU_ID is 0, 1, 2, etc.

If you are using PyTorch 0.4, you can direct your model to run on a specific GPU with:

device = torch.device("cuda:1")
model.to(device)
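
For completeness, a minimal runnable sketch of the 0.4-style approach (the tiny nn.Linear model and the tensor shapes are placeholders, just for illustration):

import torch
import torch.nn as nn

device = torch.device("cuda:1")         # target the second physical GPU
model = nn.Linear(10, 2).to(device)     # move the model's parameters to GPU 1
x = torch.randn(4, 10, device=device)   # inputs must live on the same device
out = model(x)                          # forward pass runs on GPU 1

Note that both the model and its inputs have to be moved; a tensor left on the CPU (or on cuda:0) will raise a device-mismatch error.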

Does CUDA_VISIBLE_DEVICES=0 python main.py, with a 0, make it run on GPU 1?


That's not completely true. Setting the variable inside the Python script does work, but it has to be set before the first import of PyTorch or of any other module that uses PyTorch (or that does other kinds of GPU processing, as in other DL libraries like Keras or TensorFlow).

At least this is what I experienced on a GPU cluster running Linux. On Windows, however, this seems to work only in some cases.
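
A minimal sketch of that ordering (the key point is that os.environ is assigned before torch is imported for the first time):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must happen before the first torch import

import torch
print(torch.cuda.device_count())    # prints 1: only physical GPU 1 is visible
print(torch.cuda.current_device())  # prints 0: the visible GPU is renumbered as cuda:0

Note the renumbering: once CUDA_VISIBLE_DEVICES=1 is in effect, the remaining GPU appears inside PyTorch as cuda:0, so the script should address device 0, not device 1.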

Yes, it is possible to set it within Python, but it has to be visible and be done before the CUDA init.
The thing is that it can be tricky for a user to know when the CUDA init is going to happen, so for my scripts I assume that it does not work and set it outside. That way I'm sure it always works, even if an obscure dependency changes and now initializes CUDA before I set the variable.


I ran into the same problem as LJ_Mason. Any update (preferably with an example) on how to choose a specific GPU for running a particular Python/PyTorch script in a Jupyter Notebook? Thanks.
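
Following the advice above, one approach that should carry over to Jupyter is to set the variable in the very first cell, before any cell imports torch (a sketch, not official guidance; the GPU index 1 and the tiny model are just examples):

# first notebook cell: run before anything imports torch
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# a later cell
import torch
device = torch.device("cuda")               # the only visible GPU, i.e. physical GPU 1
model = torch.nn.Linear(10, 2).to(device)   # hypothetical model, for illustration

If torch was already imported in the running kernel, the kernel typically has to be restarted for the variable to take effect.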