CUDA Semantics Docs - Indexing of GPUs


I was reading the docs on CUDA semantics, where the following snippet is available:

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)

I am not clear on the indexing here.
I understand that we can create devices with torch.device, that all of our GPUs are devices of type cuda, and that each is given an index (always starting at 0). In this example, I also follow the lines after this snippet (which I omitted) about the use of the context manager, etc. But I don't understand the indexing above.

In this example,

  • Are there two or three GPUs available?
  • Why is the default device (cuda) not given an index?
  • Is it the same as the device cuda:0?
  • Is there anything special about the default device?
  • Is there always a default device even if I give all devices an index?
  • Why is there no cuda:1?
  • Also, is it correct that the indexing in PyTorch (always starting at zero) is independent of the device IDs listed in the environment variable CUDA_VISIBLE_DEVICES?

If someone could clear up these questions, or just shed some light on the naming and indexing of devices, I’d be very grateful.

Best wishes,

1: there are 3 GPUs available and visible (indices 0, 1, 2).
2: because then it would no longer be the default device but a specific one.
3: usually yes, but not necessarily. You can set the default device to another index if you want to (e.g. via torch.cuda.set_device).
4: apart from the missing index: no.
5: yes. If you call x.cuda() without specifying an index, the default GPU will be used again.
6: probably simply left out to provide a compact example.
7: yes. If you set CUDA_VISIBLE_DEVICES to 5,6,7, then cuda:0 will be GPU 5, and so on.
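A small sketch of points 2–4: torch.device objects can be constructed even on a machine without a GPU, and a bare 'cuda' device carries no stored index — it resolves to the current device only when a tensor is actually placed on it:

```python
import torch

# A device string without an index refers to the *current* CUDA device;
# the concrete index is resolved lazily, at the point of use.
cuda = torch.device('cuda')
cuda0 = torch.device('cuda:0')

print(cuda.index)     # None: the default device stores no index
print(cuda0.index)    # 0
print(cuda == cuda0)  # False: as objects they differ, even though
                      # both usually resolve to GPU 0 at runtime
```

So cuda and cuda:0 are distinct device objects that, with the default settings, end up referring to the same physical GPU.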


Thank you, it’s clearer now.