Multi GPU - GPU index list

Hi all,

I have two GPUs and I'd like to use DataParallel.

import os

# numb_gpu = list with GPU indexes (right now it holds the total number of GPUs)
if numb_gpu > 1:
    print("Let's use", numb_gpu, "GPUs!")
    os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(str(x) for x in numb_gpu)

Is there a method to get the GPU indexes instead of the total number of GPUs?

Do you mean to get the GPU index of the currently used device? In this case you are looking for torch.cuda.current_device().
If you want all the GPU indexes, you can do something like range(numb_gpu), as NVIDIA GPUs are numbered from 0 to N-1 (for N GPUs).
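
For example, something like this (a minimal sketch, assuming you are on a machine where CUDA is available):

import torch

print(torch.cuda.current_device())    # index of the currently selected GPU, e.g. 0

numb_gpu = torch.cuda.device_count()  # total number of GPUs
print(list(range(numb_gpu)))          # all indexes, e.g. [0, 1] on a two-GPU machine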

As an additional note, I don't understand your code: numb_gpu is an int, but you are iterating over it, so this should throw an error.
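
This is what iterating over an int gives you (hypothetical interactive session):

>>> ','.join(str(x) for x in 2)
Traceback (most recent call last):
  ...
TypeError: 'int' object is not iterable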

Yes, it does. What I need is a list with all the GPU indexes. Is there a function that returns such a list?

If you want to use it to set the environment variable, you can do it like this:

numb_gpu = list(range(torch.cuda.device_count()))

So if you have 4 GPUs, you obtain numb_gpu = [0, 1, 2, 3].
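
And if the goal is DataParallel, here is a minimal sketch using that list (MyModel is just a placeholder for your own nn.Module):

import torch
import torch.nn as nn

numb_gpu = list(range(torch.cuda.device_count()))  # e.g. [0, 1, 2, 3]

model = MyModel()  # placeholder for your own model
if len(numb_gpu) > 1:
    print("Let's use", len(numb_gpu), "GPUs!")
    model = nn.DataParallel(model, device_ids=numb_gpu)
model = model.cuda()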
