torch.cuda.is_available() always returns False

Hello there! I have an Nvidia GeForce 650M video card.
I also have the following configuration:

Python version: 3.7.5
CUDA runtime version: 9.0.176
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin\cudnn64_7.dll

Versions of relevant libraries:
[pip] numpy==1.17.4
[pip] torch==1.1.0
[pip] torch-nightly==1.2.0.dev20190723
[pip] torchfile==0.1.0
[pip] torchvision==0.3.0
[conda] blas                      1.0                         mkl    defaults
[conda] cuda92                    1.0                           0    pytorch
[conda] mkl                       2019.0                   pypi_0    pypi
[conda] mkl-service               2.3.0            py37hb782905_0    defaults
[conda] mkl_fft                   1.0.15           py37h14836fe_0    defaults
[conda] mkl_random                1.1.0            py37h675688f_0    defaults
[conda] pytorch                   1.1.0           py3.7_cuda90_cudnn7_1    pytorch
[conda] torch-nightly             1.2.0.dev20190723          pypi_0    pypi
[conda] torchfile                 0.1.0                    pypi_0    pypi
[conda] torchvision               0.3.0               py37_cu90_1    pytorch

And finally, torch.cuda.is_available() returns False. It looks like my video card is too old and no longer supported?
How can I solve this?
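A small diagnostic that collects the relevant facts in one place can help here (a sketch; it only reads what the installed build reports):

```python
# Gather the PyTorch build info relevant to CUDA detection in one dict.
try:
    import torch
    info = {
        "torch": torch.__version__,          # installed package version
        "cuda_build": torch.version.cuda,    # CUDA version the wheel was built against
        "cuda_available": torch.cuda.is_available(),
    }
except ImportError:
    info = {"torch": None}  # torch is not installed in this environment

print(info)
```

If `cuda_build` is set but `cuda_available` is False, the runtime sees no GPU it can use with that build.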


Indeed, your card is no longer supported in the binaries.
You can try to compile PyTorch from source to be able to use it.

So how can I do that?
Which versions of the libraries should I compile from source?

You can find instructions on GitHub, which cover both the general and the Windows-specific steps.
You should be able to compile from master.
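The general recipe looks roughly like the following (a sketch only; the Windows-specific instructions add prerequisites such as Visual Studio, the CUDA toolkit, and cuDNN, so check them before starting):

```shell
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py install
```

The `--recursive` flag matters: the build pulls in several third-party submodules and fails without them.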

But if I build the library from the master branch, how will it differ from the ready-made compiled binaries?

Each GPU generation requires different compiled code, so the binaries we provide do not support all GPUs: covering every one would make the binaries too big (they are already 500MB+). So we do not ship code for the least popular architectures.
But if you compile from source, the build will detect your GPU automatically and generate the compiled code for it.

Finally, after building from source I get the following error:

CUDA error: no kernel image is available for execution on the device
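This error typically means the compiled library contains no kernels for the GPU's compute capability. A hedged sketch of checking what the runtime reports for the device (works only where CUDA is usable, and degrades to a message otherwise):

```python
def cuda_summary():
    """Report the compute capability of device 0, or why it cannot be read."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "CUDA not available"
    major, minor = torch.cuda.get_device_capability(0)
    return f"device 0: compute capability {major}.{minor}"

print(cuda_summary())
```

If the reported capability is not among the architectures the build was compiled for, this exact error appears at kernel launch time.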

Could you post the log of your build here, please?

The topic is no longer relevant. To solve the last problem it was necessary to rebuild torch with the TORCH_CUDA_ARCH_LIST parameter set for my GPU. Thanks!
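For reference, the fix above amounts to pinning the architecture list before rebuilding. A sketch for Windows cmd, assuming compute capability 3.0 for the GeForce 650M (a Kepler part; verify the value for your own card):

```shell
:: Restrict kernel generation to the assumed compute capability, then rebuild.
set TORCH_CUDA_ARCH_LIST=3.0
python setup.py install
```

Setting the variable explicitly also speeds up the build, since kernels for all other architectures are skipped.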