The website offers pre-compiled PyTorch packages for different versions of Python, for pip or conda, and for different CUDA versions or CPU-only.
Is it true that PyTorch does not need CUDA, cuDNN, or any other library installed on the target system, only the proper NVIDIA driver?
What, then, is the effect of choosing one of the different CUDA versions for download? Since PyTorch comes with its own CUDA library, will all the packages, regardless of which CUDA version they were compiled with, work on my system?
Also, if my system does not have a supported GPU, will installing PyTorch with CUDA still work and be usable as long as I never actually call .cuda() anywhere?
I think you have to install CUDA separately. The CUDA version selection is there to make sure you install a PyTorch build that is compatible with the CUDA version you have.
If you never use .cuda() nor torch.cuda.FloatTensor and friends, then everything should run fine on the CPU.
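As a minimal sketch of that last point, a plain CPU-only script never touches any CUDA API at all (toy tensors just for illustration):
import torch

# Plain CPU tensors; no GPU, no CUDA toolkit, and no .cuda() call involved.
x = torch.randn(4, 3)
w = torch.randn(3, 2)
print(torch.mm(x, w).shape)  # torch.Size([4, 2])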
But I think I just installed PyTorch using conda without installing CUDA; maybe if I use conda, CUDA somehow gets installed as a dependency?
I think I have seen the statement in other threads here that PyTorch does not need anything installed except the driver as a prerequisite, but maybe I misunderstood.
If you install PyTorch via the binaries (e.g., pip wheels or conda), it already comes with CUDA and cuDNN pre-packaged. The only case where you need to install CUDA & cuDNN yourself is when you are compiling it from source.
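To see what a binary install actually shipped with, a quick check along these lines should work (a sketch; the output will vary with the build you installed):
import torch

print(torch.__version__)               # PyTorch build
print(torch.version.cuda)              # CUDA version bundled with the binary (None for CPU-only builds)
print(torch.backends.cudnn.version())  # bundled cuDNN version (None if not built with cuDNN)
print(torch.cuda.is_available())       # True only if a usable GPU and driver are also present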
Sorry, but I still do not get it: when I install from the binaries, without installing CUDA and cuDNN myself, I can still choose different CUDA versions on the download screen. For example, for conda and Python 3.6 on Linux the choice is between these:
So for each CUDA version, or none, I get a different binary. But why would I want to choose, e.g., the CUDA 8.0 version over the CUDA 9.0 version there? And if it always works, even on a CPU, as long as I make sure not to use CUDA in that case, could I simply always install the version with the most recent CUDA (9.1 currently) and be happy?
Originally I thought I had to choose the CUDA version I have installed on my system, but since this is not the case, why do I have to choose at all?
It might be useful if you have an older card whose drivers don’t support CUDA 9.0 yet.
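As a rough way to check what your card and driver report once a CUDA-enabled build is installed, something like this should work (a sketch; it only prints device details when torch.cuda.is_available() returns True):
import torch

if torch.cuda.is_available():
    # Name and compute capability of the first visible GPU
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))
else:
    print("No usable CUDA device found with this build/driver combination.")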
could I simply always install the version with the most recent CUDA (9.1 currently) and be happy?
If you never use CUDA, I would just install the CPU version, because it’s smaller. Otherwise, just install the CUDA 9.1 build. By default, PyTorch always runs on the CPU. If you have the CUDA version and a supported graphics card, you would, e.g., call model.cuda() in your code to enable training on the GPU. Personally, I add the following to my scripts:
if torch.cuda.is_available():
    model = model.cuda()
And for the training Variables:
if torch.cuda.is_available():
    x, y = x.cuda(), y.cuda()
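A self-contained sketch that consolidates the two snippets above and keeps the rest of the script device-agnostic (the nn.Linear model and random data are just placeholders for your own model and batches):
import torch
import torch.nn as nn

# Pick the GPU if this build has CUDA support and a usable card, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model and data just to show the pattern.
model = nn.Linear(3, 2).to(device)
x = torch.randn(8, 3, device=device)
y = model(x)
print(y.device)  # cuda:0 on a GPU machine, cpu otherwise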
Originally I thought I had to choose the CUDA version I have installed on my system, but since this is not the case, why do I have to choose at all?
You don’t have to match your system’s CUDA version; that only matters if you build PyTorch from source. The reason you can choose between different CUDA versions for the binaries is, e.g., graphics card compatibility.
No, if you don’t build PyTorch from source, then you don’t need to install CUDA or cuDNN separately. I.e., if you install PyTorch via the pip or conda installers, the CUDA/cuDNN files required by PyTorch already come with it; you only need a working NVIDIA driver on the system to actually use the GPU.
@rasbt, do you know if there is any way to conda install torchvision -c pytorch without having to install the CUDA Toolkit? This is my output when I try to install torchvision (ignore the $USERPATH):
Collecting package metadata: done
Solving environment: done

## Package Plan ##

  environment location: /$USERPATH/anaconda3/envs/maskrcnn_benchmark

  added / updated specs:
    - torchvision

The following packages will be downloaded:

    package                    |                           build
    ---------------------------|--------------------------------
    pytorch-1.0.1              | py3.7_cuda10.0.130_cudnn7.4.2_2     375.4 MB  pytorch
    ------------------------------------------------------------
                                                           Total:     375.4 MB

The following NEW packages will be INSTALLED:

  cudatoolkit   pkgs/main/linux-64::cudatoolkit-10.0.130-0
  freetype      pkgs/main/linux-64::freetype-2.9.1-h8a8886c_1
  jpeg          pkgs/main/linux-64::jpeg-9b-h024ee3a_2
  libpng        pkgs/main/linux-64::libpng-1.6.36-hbc83047_0
  libtiff       pkgs/main/linux-64::libtiff-4.0.10-h2733197_2
  olefile       pkgs/main/linux-64::olefile-0.46-py37_0
  pillow        pkgs/main/linux-64::pillow-6.0.0-py37h34e0f95_0
  pytorch       pytorch/linux-64::pytorch-1.0.1-py3.7_cuda10.0.130_cudnn7.4.2_2
  torchvision   pytorch/noarch::torchvision-0.2.2-py_3
  zstd          pkgs/main/linux-64::zstd-1.3.7-h0b5b093_0
Since I will be doing inference on the CPU, I installed PyTorch with: conda install pytorch-nightly-cpu -c pytorch, which is what the repository requires.
Do you know why I have to install cudatoolkit and pytorch from that list? Right now I need neither cudatoolkit (inference runs on the CPU) nor pytorch (which I have already installed).
Is there any way to install only torchvision, without those two packages? Maybe via pip? But I don’t know whether that would conflict with the conda installation of PyTorch (CPU) from the pytorch channel.
I guess the CUDA Toolkit won’t be installed in the conda environment but system-wide. There are plans to add a GPU to the workstation later, but I don’t want to install anything CUDA-related until the card is installed.
Reading your answer gave me the idea to search for a CPU-only torchvision package, since this one seems to be built for GPU. And I found one!
You only have to do: conda install -c pytorch torchvision-cpu
And it comes with:
The following packages will be downloaded:
    package                    |            build
    ---------------------------|-----------------
    pytorch-cpu-1.0.1          |        py3.7_cpu_2     26.8 MB  pytorch
    torchvision-cpu-0.2.2      |               py_3       44 KB  pytorch
    ------------------------------------------------------------
                                               Total:     26.9 MB
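If it helps, a quick sanity check after installing the CPU-only packages could look like this (a sketch; the exact version strings depend on what conda resolved):
import torch
import torchvision

print(torch.__version__)          # e.g. 1.0.1
print(torchvision.__version__)    # e.g. 0.2.2
print(torch.cuda.is_available())  # False for the CPU-only build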
The only drawback is that it comes with an earlier version of PyTorch than the one specified by the repository I am working with, but I don’t think that will make much of a difference.
So if anybody has this exact question in the future, here is an answer. I hope it helps!