CUDA not available after installing PyTorch v1.8.0 via conda

Greetings,
I am building a fine-grained image classifier on top of an existing architecture that requires PyTorch v1.8.0. Following the instructions on the Previous PyTorch Versions | PyTorch page, I ran:

conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

Everything installs without errors, but the resulting PyTorch build has no GPU support and torch.cuda.is_available() returns False, even though I have a CUDA-capable GPU. My specifications are as follows:

  • Pop!_OS 22.04 LTS 64-bit
  • NVIDIA Corporation GA106M [GeForce RTX 3060 Mobile / Max-Q] / RENOIR (renoir, LLVM 15.0.7, DRM 3.54, 6.5.6-76060506-generic)
  • Driver Version: 535.113.01 CUDA Version: 12.2
  • conda 23.10.0
  • Python 3.7.0

I have already tried deleting and recreating the environment, cleaning the conda package cache, and removing all environments. It is worth mentioning that I do not have a system-wide CUDA installation.
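
For reference, this is the quick check I use inside the environment to see which build conda actually resolved; it only uses standard torch attributes:

# quick sanity check of the installed PyTorch build
import torch

print("torch version:   ", torch.__version__)          # e.g. 1.8.0 vs. 1.8.0+cpu
print("built with CUDA: ", torch.version.cuda)          # None on a CPU-only build
print("cuda available:  ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device count:    ", torch.cuda.device_count())
    print("device name:     ", torch.cuda.get_device_name(0))

If torch.version.cuda prints None, that would indicate conda pulled in the CPU-only package despite cudatoolkit=11.1 being requested.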

Try updating to the latest stable or nightly PyTorch release, as 1.8.0 is quite old by now.

Hello,
While I fully agree that newer versions are preferable, the architecture I mentioned explicitly requires PyTorch v1.8.0, which is the reason for this post.

Do you know what exactly the code requires from 1.8.0? Updating it might not be too hard.
I.e., did you try to run the code with the latest release, and did you see errors you could post here?
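
Not knowing your codebase, a rough smoke test along these lines usually surfaces API breakage quickly (the torchvision resnet50 below is only a stand-in for your architecture), and the exact errors would show how hard the upgrade really is:

# rough sketch: run a forward pass on the GPU with the installed release;
# swap the torchvision stand-in for the actual model from the repository
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet50(weights=None).to(device).eval()  # stand-in model

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224, device=device)  # dummy image batch
    out = model(x)

print("forward pass OK on", device, "- output shape:", tuple(out.shape))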