Torch.cuda.is_available() returns False, but cudnn is present

Recently, torch.cuda.is_available() has started returning False, while torch.backends.cudnn.enabled still returns True.
nvidia-smi reports driver version 417.98 and CUDA version 10.0.
Other version info:

  • Windows 10

  • pytorch 1.3.1

  • cuda toolkit 10.1.243

  • cudnn 7.6.5

It used to work properly until recently; I don’t see what the problem is now.
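For reference, the checks mentioned above can be reproduced in one place with a short diagnostic script (a sketch; it only prints the values, the interpretation is in the replies below):

```python
import torch

# Diagnostic printout for the symptoms described above: the binaries'
# CUDA version vs. what the installed driver actually supports.
print("torch version: ", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("built with CUDA:", torch.version.cuda)  # CUDA version the binaries ship with
print("cuDNN enabled: ", torch.backends.cudnn.enabled)
print("cuDNN version: ", torch.backends.cudnn.version())
```

Note that torch.version.cuda reports the CUDA version the PyTorch binaries were built against, not the locally installed toolkit, which is what matters for the mismatch discussed below.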

Do you know what changed in your setup that might have broken the working version?
Did you install any new drivers, etc.?

If you install the binaries with cudatoolkit, the local CUDA installation won’t be used.
Based on your information, it seems you have a local CUDA10.0 installation, while the PyTorch binaries ship with CUDA10.1.
Based on this table you would need an NVIDIA driver of >= 418.39 to use CUDA10.1.
Did you update PyTorch and swap CUDA10.0 for 10.1?
If so, you would have to update the driver.
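The version check described above can be sketched as a small helper. The 418.39 minimum for CUDA 10.1 is the number cited from the compatibility table in this reply; treat it as illustrative and consult NVIDIA's table for your exact platform:

```python
# Sketch of the driver-vs-toolkit check described above. Minimum driver
# for CUDA 10.1 (>= 418.39) is taken from the table referenced in this
# reply; other entries would come from the same table.
MIN_DRIVER = {"10.1": 418.39}

def driver_supports(cuda_version: str, driver_version: float) -> bool:
    """Return True if the installed driver meets the toolkit's minimum."""
    return driver_version >= MIN_DRIVER[cuda_version]

# The setup in the question: driver 417.98 with binaries built for CUDA 10.1.
print(driver_supports("10.1", 417.98))  # False -> is_available() returns False
```

This is exactly why the driver update fixes the problem: the binaries themselves are fine, but the driver predates the CUDA runtime they ship with.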

I updated the driver to version 441.66 and now it works. It seems that about 10 days ago a Windows update downgraded the driver version for some reason, perhaps because the CUDA version was 10.0. That has now also been updated to 10.2, so hopefully it will stay fine.