Difference between torch.device("cuda") and torch.device("cuda:0")

I am still getting the same RuntimeError: CUDA error: invalid device ordinal when running the above method.
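For context, this error is usually raised when the device index you request is larger than the number of GPUs PyTorch can actually see (which can also happen if CUDA_VISIBLE_DEVICES hides some devices). A minimal sketch of how the error arises; the tensor allocation line is the one that would fail:

import torch

n = torch.cuda.device_count()
print(f"Visible GPUs: {n}")

# Creating the device object succeeds, but using an index >= n fails:
bad = torch.device(f"cuda:{n}")
# x = torch.zeros(1, device=bad)  # raises RuntimeError: CUDA error: invalid device ordinal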

If you want to only use DataParallel if there are multiple GPUs, you can do:

import torch
import torch.nn as nn

model = model.cuda(0)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # by default DataParallel uses all available GPUs

See the DataParallel doc for more details on the default arguments.
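If you instead want to restrict DataParallel to a subset of GPUs rather than the default (all of them), you can pass device_ids explicitly. A small sketch, using a placeholder model:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)   # placeholder model
model = model.cuda(0)      # parameters should live on device_ids[0]
if torch.cuda.device_count() > 1:
    # only use GPUs 0 and 1 instead of every visible device
    model = nn.DataParallel(model, device_ids=[0, 1])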


Use this to get the available device:

import torch

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
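With that device handle you can then move the model and inputs in a device-agnostic way. A minimal sketch with a placeholder model and input:

import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

model = nn.Linear(10, 2).to(device)     # placeholder model
x = torch.randn(4, 10, device=device)   # batch created directly on the device
out = model(x)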

How do I connect torch with a TPU on Google Colab?

Hi,

You can find all the info in the corresponding PyTorch repo: https://github.com/pytorch/xla
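For reference, once torch_xla is installed in the Colab runtime, device selection follows the same pattern as above. A minimal sketch; the installation steps change between releases, so check the repo's README for the current ones:

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()        # the TPU core exposed as a torch device
x = torch.randn(4, 10).to(device)
print(x.device)                 # e.g. xla:0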