CUDA versioning and PyTorch compatibility

hi everyone,

I am pretty new to PyTorch. I have been trying to understand how CUDA can be used to speed up data loading and model training.

I took a look at my system: I currently have an NVIDIA GTX 1650, and its driver reports CUDA 11, yet no CUDA toolkit has been installed. Normally, when I work in Python, I use virtual environments to manage the libraries for each project. With PyTorch, I saw you can run on the CPU or use CUDA.

Currently, the latest release is PyTorch 2.1.0, which offers binaries built against CUDA 11.8 or 12.1. I have a couple of questions about how to properly set up my graphics card for use:

1.) Since the driver says the latest supported version is CUDA 11, does that mean I have to download CUDA 11.0 from NVIDIA, since other versions would not work?

2.) Does the PyTorch version also have to match the highest available CUDA version? In other words, do I need to downgrade from PyTorch 2.1.0?

3.) Is there a better way than installing into local venvs (Conda, for example)?

Thank you so much for your time.

  1. No, you don’t need to download the full CUDA toolkit; you only need to install a compatible NVIDIA driver, since the PyTorch binaries ship with their own CUDA dependencies. Your current driver should allow you to run the PyTorch binary built with CUDA 11.8, but it would fail to run the binary built with CUDA 12.1 Update 1, as the driver is too old.
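The driver/runtime relationship above can be sketched as a small version check. The minimum-driver numbers below are assumptions drawn from NVIDIA's CUDA compatibility notes for Linux (CUDA 11.x binaries run on drivers from the 450.80.02+ branch via minor-version compatibility; CUDA 12.x needs 525.60.13+); verify them against the release notes for your own driver branch:

```python
# Sketch: check whether an installed NVIDIA driver meets the minimum
# required for a given CUDA runtime. The minimum-driver table is an
# assumption based on NVIDIA's compatibility notes (Linux figures).

MIN_DRIVER = {
    "11.8": (450, 80, 2),   # CUDA 11.x minor-version compatibility branch
    "12.1": (525, 60, 13),  # CUDA 12.x baseline
}

def parse_driver(version: str) -> tuple:
    """Turn a driver string like '470.82.01' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def driver_supports(driver: str, cuda: str) -> bool:
    """True if the driver version meets the minimum for this CUDA runtime."""
    return parse_driver(driver) >= MIN_DRIVER[cuda]

# Example: a CUDA 11-era driver runs the cu118 binary but not cu121.
print(driver_supports("470.82.01", "11.8"))  # True
print(driver_supports("470.82.01", "12.1"))  # False
```

This is why updating the driver alone (no toolkit install) is enough to unlock the newer PyTorch binaries.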

  2. I don’t understand this question: PyTorch 2.1.0 is the latest release, so what do you want to downgrade from?

  3. You don’t need virtual environments at all if you only want to use a single PyTorch version.
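That said, if you do want per-project isolation (as in your current workflow), the standard-library `venv` module is enough; Conda is optional. A minimal sketch, creating an environment programmatically (equivalent to running `python -m venv .venv` in a shell):

```python
# Sketch: create a per-project virtual environment with the
# standard-library venv module.
import os
import venv

# with_pip=False just keeps this sketch fast and offline; for real use
# you would normally keep pip (the default) so you can install PyTorch.
venv.create(".venv", with_pip=False)

print(os.path.isdir(".venv"))  # True
```

After activating the environment, you would install the CUDA 11.8 build from the PyTorch wheel index as usual.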


Thank you for the quick response!

For the second question: I meant going from PyTorch 2.1.0 down to, for instance, 1.8.0.

Thank you !

Thanks for clarifying. No, you don’t need to downgrade PyTorch; you can use the latest 2.1.0+cu118 release with your driver.
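Once the +cu118 build is installed, a quick sanity check confirms which backend it will use. A minimal sketch (the try/except is only there so the snippet also runs on a machine without torch installed):

```python
# Sketch: report which backend a given PyTorch install will use,
# falling back to CPU where torch is absent or CUDA is unavailable.
try:
    import torch
    print("torch version:", torch.__version__)      # e.g. '2.1.0+cu118'
    print("built with CUDA:", torch.version.cuda)   # e.g. '11.8'
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"

print("selected device:", device)
```

If `torch.cuda.is_available()` returns `False` despite a working driver, the usual culprit is having installed a CPU-only wheel instead of the cu118 one.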