PyTorch won't run via GPU (CUDA)

I’m trying to run my code on the GPU (CUDA), but it gives me an error.

So I decided to check what PyTorch actually sees:

As I understand it, I just can’t use the GPU…

Here’s my video card just in case: NVIDIA GeForce GT 630M

Output of nvcc --version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Mon_Oct_24_19:40:05_Pacific_Daylight_Time_2022
Cuda compilation tools, release 12.0, V12.0.76
Build cuda_12.0.r12.0/compiler.31968024_0


available_gpus = [torch.cuda.device(i) for i in range(torch.cuda.device_count())]
print("List Device:\t", available_gpus)

Output: List Device: []

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Device:\t\t", device)
print("CUDA GPU:\t", torch.cuda.is_available())


Device:          cpu
CUDA GPU:        False


# and


raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
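Taken together, the checks above distinguish two different failure modes. Here is a small sketch of how to read torch.version.cuda together with torch.cuda.is_available() (the helper `diagnose` is hypothetical, not a PyTorch API):

```python
# Sketch: interpreting the checks above. On a real install you would pass in
# torch.version.cuda (None for a CPU-only build) and torch.cuda.is_available().
# `diagnose` is a hypothetical helper for illustration only.
def diagnose(cuda_build_version, cuda_available):
    if cuda_build_version is None:
        return "CPU-only PyTorch binary installed"
    if not cuda_available:
        return "CUDA build installed, but no usable GPU/driver"
    return "CUDA build with a working GPU"

# "Torch not compiled with CUDA enabled" points at the first case:
print(diagnose(None, False))  # CPU-only PyTorch binary installed
```

On a working setup, diagnose(torch.version.cuda, torch.cuda.is_available()) would return the last message.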

Here are some more errors from running other code:

UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at  ..\c10\cuda\CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
Warning: caught exception 'CUDA driver initialization failed, you might not have a CUDA gpu.', memory monitor disabled

Please help and explain in detail what to do, step by step, specifically for my case: “download this, then run this, install it like this,” and so on.

Thank you in advance.

Based on the initial error messages, you might have installed the CPU-only binary; but in any case, your GPU is too old.
The GT 630M is from the Fermi family with compute capability 2.1, which is no longer supported: the PyTorch binaries ship for compute capabilities 3.7-8.6.
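A quick way to see this in code: a sketch comparing a card’s compute capability against the minimum the prebuilt binaries ship for (3.7, per the answer above; the exact range depends on the PyTorch release):

```python
# Sketch: checking a compute capability against the minimum supported by
# the prebuilt PyTorch binaries (3.7-8.6 at the time of this thread).
MIN_SUPPORTED_CC = (3, 7)

def is_cc_supported(major, minor, min_cc=MIN_SUPPORTED_CC):
    """True if compute capability (major, minor) meets the minimum."""
    return (major, minor) >= min_cc

print(is_cc_supported(2, 1))  # False: GT 630M (Fermi, CC 2.1)
print(is_cc_supported(8, 6))  # True:  e.g. an Ampere card (CC 8.6)
```

On a machine with a working CUDA build, torch.cuda.get_device_capability(0) returns the (major, minor) pair to feed in.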

So there’s no way for me to use the GPU?

No: support for Fermi (compute capability 2.1) was dropped in CUDA 9.0, while the current binaries use CUDA 11.6-11.8.

I’ve googled that AssertionError, and in every case I’ve seen, it was always triggered by the same call: ‘current_device()’. My install command was pasted exactly from the PyTorch installation instructions:

conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia

Could the bug be in ‘current_device()’ instead?

No, the bug is not in any PyTorch-related code; the Fermi CUDA architecture is simply no longer supported, as described above.
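Regardless of the unsupported GPU, you can confirm which binary that conda command actually installed by inspecting torch.__version__ and torch.version.cuda. A sketch, with illustrative version strings (not taken from the machine in this thread):

```python
# Sketch: telling a CUDA build of PyTorch apart from a CPU-only one.
# `is_cuda_build` is a hypothetical helper; pass in torch.__version__ and
# torch.version.cuda on a real install.
def is_cuda_build(torch_version, cuda_version):
    """A CUDA build has torch.version.cuda set and no '+cpu' tag."""
    return cuda_version is not None and "+cpu" not in torch_version

print(is_cuda_build("1.13.1", "11.6"))    # True:  conda CUDA build
print(is_cuda_build("1.13.1+cpu", None))  # False: CPU-only pip wheel
```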