Install PyTorch with CUDA 12.1

Hi, could you please guide me on this? I am using WSL2 on Windows 11 and would prefer to install on Ubuntu, but PowerShell would also work for me. I have spent almost two weeks on this and did multiple reinstallations of the OS as well as the NVIDIA CUDA toolkit, but with no success. I need to use my GPU for running my llama applications.
Thanks,
Dilip

Hi, could you please guide me on installing CUDA-enabled PyTorch for Windows PowerShell or on WSL2? I also made my laptop dual-bootable with Ubuntu 20.04, but in all cases I failed to use PyTorch with CUDA enabled. I have spent almost two weeks on this and did multiple reinstallations of the OS as well as the NVIDIA CUDA toolkit, but with no success. I need to use my GPU for running my llama applications.
Thanks,
Dilip

You would need to install the NVIDIA drivers properly so that WSL2 can detect these (refer to any guide as this is unrelated to PyTorch). Once this is done, select the desired PyTorch setup from here, copy the install command, and paste it into your terminal.
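As a sketch, the command copied from the install selector looks roughly like this (the exact index URL depends on the CUDA version you pick; `cu121` here assumes the CUDA 12.1 build):

```shell
# Example pip command as produced by the pytorch.org install selector
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```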

Thanks for your quick response. I formatted my laptop and reinstalled Windows and WSL2. Now I can see that PyTorch is detecting CUDA. However, when I run the Python program for a large language model, the program uses only the CPU and the GPU is not used.

This seems to be unrelated to the install process, so feel free to create a new topic describing the issue ideally with a minimal and executable code snippet showing the GPU is not used.

Hi, I wanted to know which CUDA version of torchaudio is compatible with the PyTorch CUDA 12.1 build.
Can anyone help me with this?

The same stable and nightly releases should be compatible. Install torch and torchaudio together in the same install command for any release.
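For example, a single combined command (again assuming the CUDA 12.1 wheels are wanted) keeps the two packages' versions in sync:

```shell
# Installing torch and torchaudio together from the same index
# picks matching releases for both packages
pip3 install torch torchaudio --index-url https://download.pytorch.org/whl/cu121
```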


Thank you for your advice.

Hello, I am trying to install PyTorch on my RTX 4090 GPU. The problem is that I purged the CUDA 12.3 version and installed 11.7 so that I could use PyTorch >= 1.8, but however much I try, when I type nvidia-smi the same version is still shown; the purge and reinstall were unsuccessful. Actually, the only thing I need is to install PyTorch >= 1.8. Can anyone tell me what I need to do?

I don’t understand this workflow as the PyTorch binaries ship with their own CUDA runtime dependencies as explained in this topic previously. Your locally installed CUDA toolkit would be used if you build PyTorch from source or a custom CUDA extension. To execute PyTorch workloads you would need to install a supported NVIDIA driver.
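A quick way to see which CUDA runtime the installed binaries ship with (as opposed to the driver-side version that nvidia-smi reports) is, assuming torch is installed:

```python
import torch

# CUDA runtime version bundled with the installed PyTorch binaries
# (e.g. "11.7" for a cu117 wheel); None for a CPU-only build
print(torch.version.cuda)

# Whether the installed NVIDIA driver can actually run these binaries
print(torch.cuda.is_available())
```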

I tried PyTorch with CUDA 12.1 support for Windows, both the conda and pip installs.
After installation, PyTorch reports CUDA as available (torch.cuda.is_available() is True).
But when I try the simple MNIST example, my GPU is idle while the CPU is at 35%; obviously the GPU is not utilized by PyTorch.
I do have CUDA toolkit 12.1 + cuDNN 8.9 (which corresponds to the 12.1 toolkit) installed.
Where could the problem be, then?
Moreover, as I read, PyTorch comes with everything built in and I only need my GPU with the latest NVIDIA driver (v546.33) installed.

You would have to move the model and data to the GPU explicitly to use it.

Are there any examples of how to do it?
I expected that PyTorch does this automatically!
Do I have to take care of moving data to/from the GPU?
Then what does PyTorch do?
For instance, TF does this automatically!
Or do you mean: “model = NeuralNetwork().to(device)”?
Because this is from the example (I haven’t changed any example code; I use it intact) and I am doing it, but there is still no GPU utilization!

Pick any neural network tutorial and it should show how data and parameters are moved to the right device.
Yes, PyTorch requires you to be explicit and move data manually as it does not do these movements behind your back by default.
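A minimal sketch of the explicit movement described above (the model and tensor sizes here are made up for illustration):

```python
import torch
import torch.nn as nn

# Pick the GPU if one is visible to PyTorch, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny made-up model; .to(device) moves its parameters to that device
model = nn.Linear(8, 2).to(device)

# The input batch must be moved to the same device explicitly
x = torch.randn(4, 8).to(device)
out = model(x)

print(out.device)  # cuda:0 when a GPU is used, cpu otherwise
```

If either the model or the data is left on the CPU, PyTorch raises a device-mismatch error rather than silently moving anything for you.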

OK. I checked already, and with the MNIST example (using the .to method) it does not utilize the GPU!

If the data and the model were properly moved to the GPU and the .device attribute shows a CUDA device, the GPU is used.
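For completeness, a check along those lines (the tensor name here is illustrative) might look like:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch = torch.zeros(2, 3).to(device)

# .device tells you where the tensor actually lives
print(batch.device)  # cuda:0 on a GPU machine, cpu otherwise
```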

I don’t know.
I expect someone expert in PyTorch to explain it to me; I am brand new to PyTorch!
I just took the basic example from here:
https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html#
And it does not utilize my GPU!
So I want to know why!
PyTorch shows CUDA is available.

As already explained, PyTorch will use the GPU if the data is properly moved to the device. I’ve pointed you towards the .device attribute you could double check to make sure the data is on the GPU. So far you claim PyTorch is not using the GPU without any evidence or more information besides the claim.

I did expect to run the example and see GPU utilization!
And this is what I did not see, in contrast with TF!
What evidence? I just ran the official PyTorch example; what bigger evidence than this???
It is the PyTorch example, not mine!!!
Obviously, using PyTorch with the GPU wastes time :slight_smile:
And the support cannot give a straightforward example of GPU utilization, or PyTorch was not designed for easy GPU usage. I don’t want to waste my time deducing this.
Thank you!