Torch.cuda.is_available() is TRUE But No CUDA GPUs are available

Hi,
I am having an issue while running my script inference.py. Please see the screenshot.

I am using torch==2.1.0 and these NVIDIA packages:

nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.52
nvidia-nvtx-cu12==12.1.105

I have no idea why I am getting this error, since torch.cuda.is_available() returns True and torch.version.cuda is 12.1.

I need expert help. Thank you.

Could you verify that the torch used by the fairseq library is the same as the one in your interpreter (e.g., via __path__), as described here: Cuda.is_available returns True in consel but False in program - #2 by ptrblck?
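One way to check which installation the current interpreter actually resolves is via importlib. This is a minimal sketch: the module_location helper is illustrative (not part of torch), and it uses the stdlib json module as a stand-in so it runs anywhere; substitute "torch" in your own environment and compare the result against the path fairseq imports.

```python
import importlib.util

def module_location(name):
    """Return the file the current interpreter would load `name` from, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Stdlib stand-in; in your environment run module_location("torch")
# and compare it with torch.__path__ printed from inside fairseq.
print(module_location("json"))
```

If the two paths differ, fairseq is picking up a second torch installation (e.g., from another virtualenv or a user site-packages directory), which would explain the inconsistent CUDA behavior.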

fairseq uses torch==1.3.1, but when I try to install that version I get ERROR: No matching distribution found for torch==1.3.1, and when installing from the wheel directly I get ERROR: torch-1.3.1-cp37-cp37m-manylinux1_x86_64.whl is not a supported wheel on this platform.

What does it mean?

Your Python version is 3.10 based on the screenshot, while you are trying to install a binary built for Python 3.7.
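The cp37 in the wheel filename encodes the CPython version the binary was built for, so pip rejects it on a 3.10 interpreter. A minimal sketch of that check, assuming a hypothetical wheel_python_tag helper (this is illustrative, not part of pip):

```python
import sys

def wheel_python_tag(version_info=sys.version_info):
    """Build the CPython tag (e.g. 'cp310') that a wheel filename must match."""
    return f"cp{version_info[0]}{version_info[1]}"

# torch-1.3.1-cp37-cp37m-manylinux1_x86_64.whl requires the 'cp37' tag,
# so it only installs where the running interpreter is Python 3.7.
print(wheel_python_tag((3, 7, 4)))   # tag a 3.7 interpreter produces
print(wheel_python_tag((3, 10, 0)))  # tag a 3.10 interpreter produces
```

Since the tags differ, pip reports "not a supported wheel on this platform" rather than installing an incompatible binary.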

Is there any version available for 3.10, or should I change my Python version to 3.7?