Building PyTorch from source using pip

A recent blog post claimed that building PyTorch from source yields a considerable training-time speed-up on AlexNet-like architectures (approx. 31% for CUDA and 39% for CPU). These are very impressive performance gains, and they come essentially for free.

Looking at the installation guide, I noticed that things are pretty straightforward if you’re willing to go with Anaconda. However, the environment I mostly work in prohibits the use of Anaconda; we essentially do everything with pip-like package managers. Browsing around for guides on installing PyTorch with pip (while still building from source), I mostly encountered pages from 2017, before PyTorch introduced pip installation.

I am wondering if anyone can share a recent successful attempt at building PyTorch from source while avoiding Anaconda. My OS is Ubuntu 18.04 LTS.



The main problem with a pip-based installation from source is the dependencies.
In particular, you want to make sure the following are available and properly detected:

- CUDA and cuDNN
- MAGMA, if you need linear algebra on CUDA
- a CPU BLAS (MKL or OpenBLAS)
- a CPU LAPACK (usually bundled with the BLAS, and only needed for linear algebra)
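As a rough sanity check before building, you can look for the shared libraries on your system. This is only a sketch: the library names below are the typical Ubuntu soname stems, and PyTorch's actual detection (done by CMake during the build) is more involved, so a hit here is no guarantee and a miss is not always fatal.

```shell
# Sketch: check whether common PyTorch build dependencies are visible
# to the dynamic linker. Library names are assumptions for a typical
# Ubuntu 18.04 setup, not an authoritative list.
check_dep() {
  if ldconfig -p 2>/dev/null | grep -q "$1"; then
    echo "$1: found"
  else
    echo "$1: not found"
  fi
}

for lib in libcudart libcudnn libmagma libmkl_rt libopenblas; do
  check_dep "$lib"
done
```

The CUDA toolkit itself is easiest to verify with `nvcc --version`; cuDNN and MAGMA usually have to be installed manually from NVIDIA's and the MAGMA project's downloads rather than from Ubuntu's repositories.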

Once these are installed, you can build and install it with python setup.py install (or, I’m not sure but I think it works, pip install . from the repository root).