I am unsuccessfully trying to install PyTorch with GPU support on WSL. There are plenty of good tutorials and instructions for installing specific versions of CUDA and TensorFlow, but these always go out of date as soon as a new version is released. I can’t find any good documentation on which versions of TensorFlow are supported or required by PyTorch.
My install currently has ‘torch.__version__’ returning ‘2.1.2+cu121’. I assume this means it is using PyTorch 2.1.2. But I don’t know whether this means it thinks it is using CUDA 12.1, or it requires 12.1, or something else.
Do I need to downgrade to CUDA 12.1 to use PyTorch 2.1.2? If so, which version of TensorFlow should I be on?
PyTorch binaries ship with their own CUDA dependencies, in your case CUDA 12.1U1. You would only need to install an NVIDIA driver so that PyTorch can communicate with your GPU. I don’t know what TensorFlow requires.
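The version string you quoted encodes exactly this. As a small sketch (using the ‘2.1.2+cu121’ value you reported), the part after the ‘+’ is a build tag naming the CUDA toolkit the binaries were built against and bundle, not a toolkit you must install yourself:

```python
# Split a PyTorch wheel version string into its release and build tag.
# "2.1.2+cu121" = PyTorch 2.1.2, built against (and bundling) CUDA 12.1.
version = "2.1.2+cu121"  # the value reported by torch.__version__
release, _, build_tag = version.partition("+")
print(release)    # → 2.1.2
print(build_tag)  # → cu121
```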
Thanks! But please explain further. I thought PyTorch was just a library that called TensorFlow, which had a dependency on CUDA, which calls NVIDIA’s driver.
Are you saying I should be able to use PyTorch with GPU acceleration WITHOUT installing either TensorFlow or CUDA separately?
PyTorch does not depend on TensorFlow and does not call it.
Yes, you can install the PyTorch binaries, an NVIDIA driver, and can directly run PyTorch scripts on the GPU.
Thanks! Wow, I never realized this and haven’t really used it much until now. I thought it was just a different syntax flavor of Keras.
Much, much easier to install, although so far it’s harder to tell whether it installed correctly.
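For anyone landing here later, a minimal sanity check along these lines (a sketch, assuming the usual ‘torch’ package is what you installed) reports the installed version, the CUDA version it was built with, and whether it can actually reach a GPU through the driver:

```python
# Report whether PyTorch is installed and whether it can see a GPU.
import importlib.util

if importlib.util.find_spec("torch") is None:
    report = "PyTorch is not installed"
else:
    import torch
    report = (
        f"torch {torch.__version__}, built for CUDA {torch.version.cuda}, "
        f"GPU available: {torch.cuda.is_available()}"
    )
    if torch.cuda.is_available():
        # Name of the first visible device, e.g. "NVIDIA GeForce RTX 3080"
        report += f" ({torch.cuda.get_device_name(0)})"
print(report)
```

If ‘GPU available’ comes back False on WSL, the usual culprit is the Windows-side NVIDIA driver rather than anything inside the Linux environment.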