Hi,
I was reading the docs on CUDA semantics, where the following snippet is available:
cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)
I am not clear on the indexing here.
Basically, I think I understand that we can create devices with torch.device, that all of our GPUs will be devices of type cuda, and that each one is given an index (always starting at 0). I also understand the lines that follow in the docs (which I omitted here) regarding the use of the context manager, etc. But I don’t understand the indexing above.
In this example,
- Are there two or three GPUs available?
- Why is the default device (cuda) not given an index? Is it the same as the device cuda:0?
- Is there anything special about the default device?
- Is there always a default device even if I give all devices an index?
- Why is there no cuda:1?
- Also, is it correct that the indexing in PyTorch (always starting at zero) is independent of the device IDs held by the environment variable CUDA_VISIBLE_DEVICES?
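In case it helps, here is a small sketch of what I observe when constructing these devices. (As far as I can tell, torch.device objects are just lightweight descriptors, so this runs even on a machine without a GPU; please correct me if the interpretation in the comments is wrong.)

```python
import torch

# torch.device objects are descriptors; constructing them does not
# require an actual GPU to be present.
cuda = torch.device('cuda')    # default CUDA device, no explicit index
cuda0 = torch.device('cuda:0')

print(cuda.index)    # None -> no index until resolved to the current device
print(cuda0.index)   # 0
print(cuda == cuda0) # False -> compared structurally: index None != index 0
```

So the two objects compare unequal even though, on a single-GPU machine, tensors sent to either would presumably end up on the same physical device.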
If someone could clear up these questions, or just shed some light on the naming and indexing of devices, I’d be very grateful.
Best wishes,
Max