What's the difference between command installation and source-code installation?

I'm new to PyTorch. Yesterday, I installed PyTorch on our server from source. Before installation, I had to sort out the CUDA and cuDNN versions: our server has CUDA 8.0 and cuDNN 5.1, so the latest PyTorch could not be built successfully, since it requires cuDNN 6.0 or above. However, some of my classmates installed PyTorch on the same machine (CUDA 8.0 and cuDNN 5.0) successfully with pip. It seems that the pip command pays no attention to the CUDA and cuDNN versions. I also don't know whether PyTorch can be installed successfully through the conda command line.
So can anyone tell me the difference between command installation and source-code installation? Does the command-line install only check the CUDA version and ignore the cuDNN version? Thank you.
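For context, the two routes discussed here can be sketched roughly as follows (the exact package names and commands are assumptions — check pytorch.org for the command matching your CUDA version):

```shell
# Route 1: binary install — the downloaded package bundles its own
# CUDA/cuDNN libraries, so the system cuDNN version barely matters.
pip install torch

# Route 2: source build — compiles against whatever CUDA/cuDNN the
# build scripts find on the machine, so the local versions must match
# what the code requires.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py install
```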

The conda and pip binaries come with built-in libs. See this answer from another thread:

Usually you don’t need to build it from source.
The reasons why you would want to build it yourself are:

  • if you would like to develop a new PyTorch feature and thus need to work on the source code
  • if you need a new PyTorch feature or bug fix which has not been integrated into the binaries yet.

Thank you for your reply.

I saw that reply, and when I tried installing PyTorch with the command, it downloaded the cuDNN library (7.1). Does that mean PyTorch has nothing to do with the system cuDNN version and runs with its own?
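One way to see this for yourself, as a quick sketch (it assumes torch is importable), is to ask the installed binary which versions it was built against, independent of the system toolkit:

```shell
# Prints the CUDA version the binary was compiled with (None for CPU-only)
python -c "import torch; print(torch.version.cuda)"
# Prints the bundled cuDNN version as a packed integer, e.g. 7102 for 7.1.2
python -c "import torch; print(torch.backends.cudnn.version())"
```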

If you build it from source, the script will try to find your cuDNN installation.
You can see some information in the terminal while building, e.g. which libs were found and which are missing.
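As a sketch, you can also point the build at a specific cuDNN before compiling; the variable names below match the build scripts of that era, but treat them as assumptions and verify against setup.py:

```shell
# Tell the source build where a local cuDNN lives (paths are examples)
export CUDNN_INCLUDE_DIR=/usr/local/cuda/include
export CUDNN_LIB_DIR=/usr/local/cuda/lib64
python setup.py install
# In the build output, look for the CMake summary lines reporting
# whether cuDNN was found, e.g. "USE_CUDNN : ON".
```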

I’ve heard of users getting a large speed-up when building from source compared to using a pre-built binary. Is that expected?

It depends on the local installation of various libs.
E.g. the binaries ship with CUDA, cudnn, mkl-dnn etc.
If you don’t have cudnn or mkl locally installed, you would see a slowdown.
However, the latest versions of these libs might give you a speedup, so you could compare the binaries against your build from source.
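A minimal way to compare the two installs, assuming torch is importable in each environment, is to time the same op under both:

```shell
# Run once in the environment with the pip binary and once with the
# source build, then compare the reported times.
python -m timeit -s "import torch; x = torch.randn(1024, 1024)" "torch.mm(x, x)"
```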

Can I build PyTorch from source so that it bundles the CUDA binaries, like the wheels we get from pip install?

Yes, and you can find the build scripts in the pytorch/builder repository.

I tried to build a wheel file by setting these env variables:
ENV PYTORCH_BUILD_VERSION=1.8.1
ENV PYTORCH_BUILD_NUMBER=1
ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
ENV TH_BINARY_BUILD=1
ENV USE_STATIC_CUDNN=1
ENV USE_STATIC_NCCL=1
#ENV ATEN_STATIC_CUDA=1
ENV USE_CUDA_STATIC_LINK=1
ENV NCCL_ROOT_DIR=/usr/local/cuda
ENV TORCH_CUDA_ARCH_LIST="7.5"
ENV INSTALL_TEST=0

After building, I tried to install this wheel on another system which doesn't have CUDA installed but does have GPU driver support. When I try to import torch, it throws an error that some "xyz file is not readable as it does not exist". I then tracked down the missing file, and that "xyz file" is part of the CUDA toolkit. Why does it still require CUDA after we set the env USE_CUDA_STATIC_LINK=1 during wheel creation?
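One way to diagnose this, sketched below (the library path and glob are assumptions — adjust them to your install), is to list which shared libraries the built extension still expects but cannot find on the CUDA-less machine:

```shell
# Locate torch's bundled lib directory, then ask ldd which dynamic
# dependencies are unresolved on this machine.
TORCH_LIB="$(python -c 'import torch, os; print(os.path.dirname(torch.__file__))')/lib"
ldd "$TORCH_LIB"/libtorch*.so | grep "not found"
```

Any line reported there names a library that was still dynamically linked despite the static-link flags, which would explain the import error.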