If I only have one GPU, does doing either of the below mean that the same GPU will be used?
device = torch.device('cuda:0')
device = torch.device('cuda')
Thanks!
Yes! The device index only matters when you have multiple GPUs.
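A quick sketch you can run to check this on a single-GPU machine (assuming CUDA is available); both specs end up on the same physical device:

import torch

a = torch.zeros(1, device=torch.device('cuda'))    # implicit device index
b = torch.zeros(1, device=torch.device('cuda:0'))  # explicit index 0
print(a.device)                     # cuda:0
print(b.device)                     # cuda:0
print(torch.cuda.current_device())  # 0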
Thanks, and if I do torch.device('cuda') with multiple GPUs, is this the same as doing torch.device('cuda:0')?
I think so. By default, torch.device('cuda') refers to the current CUDA device, which is index 0 unless you change it with torch.cuda.set_device(). Similarly, tensor.cuda() and model.cuda() move the tensor/model to 'cuda:0' by default when no index is specified.
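For example, a minimal check (assuming at least one GPU is visible):

import torch

t = torch.randn(2).cuda()                 # no index given
print(t.device)                           # cuda:0

model = torch.nn.Linear(4, 2).cuda()      # same default for modules
print(next(model.parameters()).device)    # cuda:0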
Right, so by default doing torch.device('cuda') will give the same result as torch.device('cuda:0') regardless of how many GPUs I have?
Yes, and if you want to use multiple GPUs you should look at torch.nn.DataParallel(model) and model.to(device).
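A rough sketch of that pattern, assuming more than one GPU is visible (the Linear layer here is just a placeholder model):

import torch
import torch.nn as nn

device = torch.device('cuda:0')
model = nn.Linear(10, 2)

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # splits each batch across all visible GPUs

model = model.to(device)

x = torch.randn(8, 10, device=device)
out = model(x)       # with DataParallel, outputs are gathered back on cuda:0
print(out.device)    # cuda:0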
And assuming I have at least one GPU, by default it doesn't matter whether I use device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
or device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu');
the same GPU will be used regardless of how many GPUs I have?
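For reference, here's the full pattern I'm using (a minimal sketch; the Linear layer is just a placeholder):

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(3, 1).to(device)
x = torch.ones(1, 3, device=device)
print(model(x).device)   # cuda:0 on a GPU machine, cpu otherwise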