AWS NVIDIA Driver Issue with PyTorch 2.2.2 and 2.4.1

I have a repo that was using torch 2.2.2 (CUDA 12.1) deployed to a SageMaker inference endpoint with instance type ml.g4dn.xlarge. `nvidia-smi` tells me this is a Tesla T4 and reports CUDA Version 12.4. CloudWatch is giving me:

```
venv/lib/python3.10/site-packages/torch/cuda/__init__.py:141: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
```

I updated the repo’s torch version to 2.4.1+cu124 but am getting the same CloudWatch message.
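
For reference, this is roughly what I log from the model code on the endpoint to check what the wheel and the container actually see (output goes to the same CloudWatch stream):

```python
import torch

# CUDA runtime the installed wheel was built against
# ("12.1" for 2.2.2+cu121, "12.4" for 2.4.1+cu124).
print("torch:", torch.__version__, "built for CUDA", torch.version.cuda)

# This call is what triggers the UserWarning above; it still prints False
# on the endpoint after the upgrade.
print("cuda available:", torch.cuda.is_available())
```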

Is there something I’m missing?

The warning means the NVIDIA driver inside the endpoint container only supports CUDA 11.4 ("found version 11040" decodes to 11.4), which is older than what the cu121 and cu124 wheels require, so changing the torch version alone won't help. Either install a PyTorch binary built with CUDA 11 runtime dependencies, or update your driver.
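
If you stay on the CUDA 11 route, the cu118 wheels from the PyTorch index are the usual choice (e.g. `pip install torch --index-url https://download.pytorch.org/whl/cu118`; pin whatever version the repo needs). To confirm what the driver in the container can actually serve, here is a minimal sketch using pynvml (the `nvidia-ml-py` package); it assumes NVML is loadable inside the endpoint container:

```python
import pynvml

pynvml.nvmlInit()
# Maximum CUDA version the installed driver can serve, encoded as
# major * 1000 + minor * 10, so 11040 (the number in the warning) is CUDA 11.4.
cap = pynvml.nvmlSystemGetCudaDriverVersion()
print(f"driver supports CUDA up to {cap // 1000}.{(cap % 1000) // 10}")
pynvml.nvmlShutdown()
```

If that prints 11.4, any cu121/cu124 build will keep failing regardless of which torch version you pin.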