PyTorch CUDA Availability Issue on NVIDIA GeForce MX450

Hi there.
Despite having a CUDA-enabled version of PyTorch installed, torch.cuda.is_available() is returning False.
Could I please seek your assistance with this?
Thanks in advance.

import torch
print("PyTorch version:", torch.__version__)
print("CUDA version:", torch.version.cuda)
print("Is CUDA available:", torch.cuda.is_available())

When I execute the Python code above in a JupyterLab notebook, I get the following output:

  • PyTorch version: 2.2.2+cu121
  • CUDA version: 12.1
  • Is CUDA available: False

According to the Device Manager on my personal laptop, I have the following specifications:

  • GPU: NVIDIA GeForce MX450
  • Driver date: August 1, 2021
  • Driver version: 27.21.14.5774

It also states that “The best drivers for your device are already installed”.

Based on the version string, this looks like the Windows Device Manager numbering rather than NVIDIA's own, and the driver dates from August 2021, so you might need to reinstall or update it.
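(For reference, Device Manager reports driver versions in Windows' own format. Assuming the commonly used convention that NVIDIA's version is encoded in the last five digits, it can be recovered with a small sketch like this; the function name is purely illustrative:)

```python
def nvidia_driver_version(windows_version: str) -> str:
    """Map a Windows Device Manager driver string (e.g. '27.21.14.5774')
    to the NVIDIA-style version: strip the dots, take the last five
    digits, and insert a dot before the final two."""
    digits = windows_version.replace(".", "")[-5:]
    return f"{digits[:3]}.{digits[3:]}"

print(nvidia_driver_version("27.21.14.5774"))  # → 457.74
```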

Thanks @ptrblck for your reply.
The left snapshot displays the system information from the NVIDIA Control Panel on my laptop. I’ve tried installing several NVIDIA drivers downloaded from https://www.nvidia.com/Download/Find.aspx?lang=en-us# (middle snapshot), but none of them have passed the compatibility check so far (right snapshot). Additionally, torch.cuda.get_arch_list() returns an empty list [].
I’m unsure what I might have done incorrectly.

Once you’ve properly installed a driver, double-check that a single PyTorch binary with CUDA support is installed: the current one does not seem to ship with any CUDA kernels, which is why the architecture list comes back empty.
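A rough sanity check along these lines can be sketched as follows. This is a heuristic, not an official API; it only inspects the version string and architecture list that PyTorch already reports:

```python
def looks_cuda_enabled(version: str, arch_list: list) -> bool:
    """Heuristic: a CUDA-enabled wheel carries a '+cuNNN' tag
    (e.g. '2.2.2+cu121') and compiles kernels for at least one
    GPU architecture, so the arch list must be non-empty."""
    return "+cu" in version and len(arch_list) > 0

# The situation reported above: a '+cu121' tag but an empty arch list,
# which suggests a broken or mixed installation.
print(looks_cuda_enabled("2.2.2+cu121", []))         # False
print(looks_cuda_enabled("2.2.2+cu118", ["sm_75"]))  # True
```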

Hi @ptrblck. Thanks again for your response.

I uninstalled the current version of PyTorch and reinstalled it using the following command:

!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

I followed the instructions provided at Start Locally | PyTorch.

The following Python code returns True:

import torch
torch.cuda.is_available()

Additionally, when I execute the following command:

torch.cuda.get_arch_list()

I receive the following output:

['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90', 'compute_37']

In addition, print(torch.cuda.get_device_name(0)) prints 'GeForce MX450'.
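That list can also be cross-checked against the GPU's compute capability: the MX450 is a Turing part with capability 7.5, which maps to sm_75. A small illustrative helper (not part of the torch API) makes the check explicit:

```python
def arch_supported(capability: tuple, arch_list: list) -> bool:
    """Check whether a device's (major, minor) compute capability,
    as returned by torch.cuda.get_device_capability(), has a
    matching sm_* entry in the binary's compiled arch list."""
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

arch_list = ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70',
             'sm_75', 'sm_80', 'sm_86', 'sm_90', 'compute_37']
print(arch_supported((7, 5), arch_list))  # MX450 (Turing) → True
```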

I believe everything is working correctly now.

Yes, now it looks good. Run a quick smoke test by allocating a random tensor on the GPU via torch.randn(1).cuda() and confirming the allocation succeeds.

Hi @ptrblck.
When I run torch.randn(1).cuda(), it returns tensor([0.1023], device='cuda:0').
Thanks for the advice!