RuntimeError: CUDA error: invalid device ordinal

import torch
import time
print(torch.__version__)
print(torch.cuda.is_available())

a = torch.randn(10000,1000)
b = torch.randn(1000,2000)

device = torch.device('cuda:1')
a = a.to(device)
b = b.to(device)

start_time = time.time()
c = torch.matmul(a, b)
torch.cuda.synchronize()  # matmul launches asynchronously; wait for the kernel before timing
end_time = time.time()
print(a.device, end_time - start_time, c.norm(2))

Why does this code generate this error? torch.cuda.is_available() returns True, my CUDA version is 10.1, and my PyTorch version is 1.3.1. I hope to get your help.

CUDA devices start counting at 0, so on a single-GPU machine there is no device 1. Use torch.device('cuda:0'), or better yet torch.device('cuda'), which resolves to the currently selected device instead of hard-coding an index.
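For reference, a minimal device-agnostic sketch (the shapes are just illustrative, and the CPU fallback keeps it runnable on machines without a GPU):

import torch

# 'cuda' resolves to the currently selected device (cuda:0 by default);
# the fallback avoids the invalid-ordinal error on CPU-only machines
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

a = torch.randn(10000, 1000, device=device)
b = torch.randn(1000, 2000, device=device)
c = torch.matmul(a, b)
print(c.device, c.norm(2))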


Hi, what if there are 2 GPUs and I want to use the second one?


Use 'cuda:1' to select the second GPU while both are visible, or expose only the second GPU via CUDA_VISIBLE_DEVICES=1 (which masks the first one) and index it as 'cuda:0' inside your script.
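A rough sketch of both options (the device indices assume a two-GPU machine; the mask must be set before CUDA is initialized, ideally before importing torch):

import os

# Option 2: expose only the second physical GPU to this process
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import torch

# Option 1 (no mask, both GPUs visible) would be: torch.device('cuda:1')
# With the mask above, the second physical GPU is the only visible device,
# so it is indexed as cuda:0
device = torch.device('cuda:0')
x = torch.randn(4, 4, device=device)
print(torch.cuda.device_count())  # prints 1 while the mask is active

Setting the variable in the shell instead (CUDA_VISIBLE_DEVICES=1 python train.py) avoids any initialization-ordering issues entirely.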


I get Number of available CUDA devices: 1 in Google Colab, but it still shows the error:
RuntimeError: CUDA error: invalid device ordinal
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.