CUDA Error after calling torch.cuda.is_available()

Hi all,

I'm experiencing a strange error while trying to generalize my training loop across devices. My setup code checks whether CUDA is available and selects the device accordingly, but after I call torch.cuda.is_available(), moving tensors to the GPU raises a CUDA error. After some testing I found this doesn't happen if I create a CUDA tensor before calling torch.cuda.is_available(). Here's an example of what I mean:

>>> import torch
>>> torch.cuda.is_available()
>>> torch.randn(1,1).to("cuda")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Mike\Anaconda3\lib\site-packages\torch\cuda\", line 163, in _lazy_init
RuntimeError: CUDA error: unknown error

Restarting the Python shell and trying again:

>>> import torch
>>> torch.randn(1,1).to("cuda")
tensor([[-0.2594]], device='cuda:0')
>>> torch.cuda.is_available()
>>> torch.randn(1,1).to("cuda")
tensor([[0.4193]], device='cuda:0')
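For context, the device-setup pattern I'm trying to use is the usual device-agnostic one (a minimal sketch of my setup, not the exact training loop):

```python
import torch

def get_device() -> torch.device:
    # Query availability once and reuse the resulting device object.
    # Note: is_available() itself does not create the CUDA context;
    # that happens lazily on the first CUDA tensor operation.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = get_device()
x = torch.randn(1, 1).to(device)  # falls back to CPU when no GPU is usable
```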

I restarted my computer and updated the NVIDIA drivers but still encounter this error. I'm using:
Python 3.7.3
PyTorch 1.1.0
CUDA 10.0
GeForce driver 430.86

Would anyone have an idea of what could be causing this issue? Please let me know if I've left out any relevant details, and thanks for your time. Cheers

Do you have any errors when you run nvidia-smi and nvcc -V? Did you install PyTorch built for the correct CUDA version?
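To check the install/driver match from inside Python, a few quick sanity checks (assuming a CUDA build of PyTorch; torch.version.cuda is None on CPU-only builds) are:

```python
import torch

print(torch.__version__)          # installed PyTorch version, e.g. "1.1.0"
print(torch.version.cuda)         # CUDA version the wheel was compiled against
print(torch.cuda.device_count())  # 0 if the driver/runtime isn't visible
```

Comparing torch.version.cuda against the output of nvcc -V is a quick way to spot a mismatched install.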

Thanks for the response. It suddenly started working as intended. It's strange because I didn't restart or install anything new between then and now. Unfortunate for anyone who hits a similar issue, but them's the breaks, I guess. Cheers
