How to change the default GPU device? (device_ids[0])

Thanks for your reply, I will do some experiments to verify these functions. Besides, I found out some other useful functions at How to specify GPU usage?.

I have tried setting CUDA_VISIBLE_DEVICES in the shell, then ran a simple script to test whether the setting had taken effect; unfortunately, it does not seem to work.


The script is:

import torch
print(torch.cuda.current_device())

The above script still shows that the current device is 0.

I find that torch.cuda.set_device(device_num) works fine in setting the desired GPU to use.


Two things you did wrong:

  1. There shouldn't be a semicolon. With the semicolon, they are two separate shell commands, so the variable is not passed to the Python process.
  2. Even with the correct command, CUDA_VISIBLE_DEVICES=3 python, you won't see torch.cuda.current_device() == 3, because the variable completely changes which devices PyTorch can see and renumbers them from 0. So in PyTorch land, device #0 is actually device #3 of the system. You can verify that with nvidia-smi.
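To see the renumbering in action, here is a small sketch (assuming the system has a GPU 3; the variable must be set before torch is first imported, and the snippet degrades gracefully where torch or a GPU is absent):

```python
import os

# Must be set before the first `import torch` in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

try:
    import torch
    if torch.cuda.is_available():
        # Only physical GPU 3 is visible, and PyTorch renumbers it as 0,
        # so current_device() reports 0; nvidia-smi still shows GPU 3 in use.
        print(torch.cuda.current_device())
except ImportError:
    pass  # torch not installed in this environment
```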

Thanks for the information. I thought that PyTorch would print the actual GPU id even if we use CUDA_VISIBLE_DEVICES to set the available GPUs.

That variable controls which devices CUDA exposes, and PyTorch can't do anything about it.

Hi, you can specify the GPUs to use in a Python script as follows:

import os
from argparse import ArgumentParser

parser = ArgumentParser(description='Example')
parser.add_argument('--gpu', type=int, default=[0, 1], nargs='+', help='used gpu')

args = parser.parse_args()
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in args.gpu)
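For illustration, here is a self-contained version of that snippet; the argv list is passed explicitly to parse_args to simulate running `python script.py --gpu 2 3`:

```python
import os
from argparse import ArgumentParser

parser = ArgumentParser(description='Example')
parser.add_argument('--gpu', type=int, default=[0, 1], nargs='+', help='used gpu')

# Simulate the command line `python script.py --gpu 2 3`.
args = parser.parse_args(['--gpu', '2', '3'])
os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in args.gpu)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 2,3
```

Remember that this only works if it runs before torch initializes CUDA.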


It does not work for me; it always uses the first (i.e., 0) GPU.

Thanks, this is the easiest way to solve this problem.

It shouldn’t happen. That is a CUDA flag. Once set, PyTorch will never have access to the excluded device(s).

I changed where it is set and it worked. Thanks for your reply!


or torch.cuda.set_device(device_id)


This is a very useful solution, especially when you are going to run someone else's code that doesn't provide a way to specify the CUDA id.

According to the tutorial, it's better to use the environment variable :slight_smile:


Sets the current device.

Usage of this function is discouraged in favor of device. In most cases it's better to use the CUDA_VISIBLE_DEVICES environment variable.

Parameters: device (torch.device or int) – selected device. This function is a no-op if this argument is negative.

Great answer !!! It helps me a lot.

I tried


and it doesn’t work for me.



does work for me.


The following works properly:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must run before torch is imported

Usually GPU numbers start from 0. Since you have an issue with the default device, try anything other than 0.


Thanks a lot for this solution!

Thank you, life saver

Just want to add to this answer: ideally, this environment variable should be set at the top of the program. Changing the CUDA_VISIBLE_DEVICES variable will not work if it is done after setting torch.backends.cudnn.benchmark.
This might also be true for other torch/CUDA-related calls, so it's better to set the environment variables at program start, or use export CUDA_VISIBLE_DEVICES="NUM" before starting the program.
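For example (GPU index 2 and the script name are just placeholders):

```shell
# Set the variable in the launching shell; any program started afterwards
# (e.g. `python train.py`) inherits it before CUDA is initialized.
export CUDA_VISIBLE_DEVICES=2
echo "$CUDA_VISIBLE_DEVICES"
```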


Many thanks! It’s really helpful!