Build PyTorch from source. Questions

I have a GT 710. It has compute capability 3.5. This means that if I want to use PyTorch with a GPU, I have to build PyTorch from source.
I have already made several attempts, all unsuccessful. I didn't expect the build process to take hours, with the CPU at 100% the whole time. I work on Windows, which makes the installation process even more complicated. I have a few questions and would like some answers before I proceed to the next attempt.
I use Unity mlagents toolkit. It requires pytorch >=1.6.0,<1.9.0. I chose the minimum configuration (pytorch 1.6, CUDA 9.2, cuDNN v7) to ease the build process.
Question 1: Is it justified to use the minimal configuration?
I found the release/1.6 branch of the pytorch repository, but I'm not sure if this is correct.
Question 2: Where should I look for pytorch repositories of different versions?
I have Visual Studio 2019 16.11.7 and I can’t install toolset 14.11. The instructions say:
“There is no guarantee of the correct building with VC++ 2017 toolsets, others than version 15.4 v14.11.”
Does this apply to Studio 2019?
Question 3: Can I use Visual Studio 2019 to build?
I do not see how I can specify the desired compute capability. Maybe TORCH_CUDA_ARCH_LIST?
Question 4: How do I specify the desired compute capability?
Let’s say I built the pytorch binaries successfully. Where are these binaries located? How do I install these binaries?
Question 5: How do I install new pytorch binaries?

  1. You can pick any PyTorch tag which supports your setup (e.g. CUDA 9.2). The currently required min. CUDA toolkit version is 10.2, so you should double check that PyTorch 1.6.0 supports 9.2. Yes, the release/1.6 branch is correct. You could alternatively use git checkout v1.6.0.

  2. The different release versions are all tagged in the GitHub repository. I don't know which Visual Studio versions are required, as I'm not using Windows.

  3. See 2.

  4. Using TORCH_CUDA_ARCH_LIST is correct. The source build will be installed into your current environment if you use python setup.py install and will be usable there. The installed package will also be located in your environment. E.g. if you are using conda to manage your environments and install into the base environment, you would find the package in e.g. /opt/conda/lib/python3.8/site-packages/torch.
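As a concrete sketch, on Windows the architecture list could be set in the Anaconda Prompt before starting the build (the value 3.5 matches the GT 710; this shows only the arch-list variable, not the full set of build variables):

```shell
:: Build CUDA kernels only for compute capability 3.5 (GT 710).
set TORCH_CUDA_ARCH_LIST=3.5
python setup.py install
```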

  5. See 4.

Thanks for the answer, @ptrblck. Everything is clear.

I have built pytorch with cc 3.5 and this works for me. Then I built a whl file “torch-1.6.0a0+b31f58d-cp37-cp37m-win_amd64.whl”, but it does not contain CUDA.

How to build a .whl like the official one?
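Since a wheel is just a zip archive, one quick way to check whether a built wheel actually bundles the CUDA runtime is to list the DLLs under torch/lib. This is a small sketch; the helper name and the assumption that the CUDA DLLs live under torch/lib are mine:

```python
import zipfile

def list_bundled_dlls(wheel_path):
    """Return the .dll files packaged under torch/lib inside the wheel."""
    with zipfile.ZipFile(wheel_path) as whl:
        return [name for name in whl.namelist()
                if name.startswith("torch/lib/") and name.endswith(".dll")]

# An official CUDA wheel would list files such as the CUDA runtime DLL here;
# an empty list means the CUDA binaries were not packaged.
```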

I followed these steps:
First I installed Visual Studio 2017 with the toolset 14.11.
Then I installed CUDA 9.2 and cuDNN v7.
Clone PyTorch Source:

git clone --branch release/1.6 https://github.com/pytorch/pytorch pytorch-1.6
cd pytorch-1.6
git submodule sync
git submodule update --init --recursive

In Anaconda Prompt:

conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi

set CMAKE_GENERATOR=Visual Studio 15 2017
set CMAKE_GENERATOR_TOOLSET_VERSION=14.11

for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,16^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%


python setup.py install
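The vswhere one-liner above only locates the Visual Studio 2017 installation and initializes the x64 build environment with the pinned toolset. With the installation path written out by hand it boils down to something like this (the path is illustrative and assumes a Community edition install):

```shell
:: Initialize the MSVC x64 environment with toolset 14.11.
call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=14.11
```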

After 5 hours, the build process was completed successfully. Then I built whl:

python setup.py bdist_wheel

I got whl in a few minutes. I installed this wheel in a separate conda environment and tested it.

(test-whl) E:\PyTorchSource>pip install pytorch-1.6\dist\torch-1.6.0a0+b31f58d-cp37-cp37m-win_amd64.whl

(test-whl) E:\PyTorchSource>conda list
# packages in environment at C:\Users\qwego\Anaconda3\envs\test-whl:
# Name                    Version                   Build  Channel
ca-certificates           2021.10.26           haa95532_2
certifi                   2021.10.8        py37haa95532_0
future                    0.18.2                   pypi_0    pypi
numpy                     1.21.4                   pypi_0    pypi
openssl                   1.1.1l               h2bbff1b_0
pip                       21.2.4           py37haa95532_0
python                    3.7.11               h6244533_0
setuptools                58.0.4           py37haa95532_0
sqlite                    3.36.0               h2bbff1b_0
torch                     1.6.0a0+b31f58d          pypi_0    pypi
vc                        14.2                 h21ff451_1
vs2015_runtime            14.27.29016          h5e58377_2
wheel                     0.37.0             pyhd3eb1b0_1
wincertstore              0.2              py37haa95532_2

(test-whl) E:\PyTorchSource>python
Python 3.7.11 (default, Jul 27 2021, 09:42:29) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.cuda.get_arch_list())
>>> torch.tensor([1.0, 2.0]).cuda()
tensor([1., 2.], device='cuda:0')

This works on my computer, but I would like to get an independent whl.

I haven’t built a wheel for Windows (Linux only, with CUDA support), but I would try to reuse the build steps used for the official Windows wheels, found e.g. in the pytorch/builder repository.

A wheel is just a zip archive.
I think you could unzip torch-1.6.0a0+b31f58d-cp37-cp37m-win_amd64.whl, copy the missing binaries in the same way builder/copy.bat at main · pytorch/builder does, and then zip it again.

Note: keep the rezipped package name as torch-1.6.0a0+b31f58d-cp37-cp37m-win_amd64.whl.
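The unzip/copy/rezip step can be sketched with the standard library. The torch/lib target directory and the function name are my assumptions, mirroring what builder/copy.bat does:

```python
import os
import shutil
import zipfile

def repack_wheel(wheel_path, extra_files, work_dir="wheel_tmp"):
    """Unzip a wheel, copy extra binaries (e.g. the CUDA DLLs) into
    torch/lib, and zip it back up under the *same* file name."""
    shutil.unpack_archive(wheel_path, work_dir, format="zip")
    lib_dir = os.path.join(work_dir, "torch", "lib")
    os.makedirs(lib_dir, exist_ok=True)
    for path in extra_files:
        shutil.copy(path, lib_dir)
    # Rebuild the archive, then rename .zip back to .whl, keeping the
    # original wheel name so pip accepts it.
    base = wheel_path[:-len(".whl")]
    archive = shutil.make_archive(base, "zip", work_dir)
    os.replace(archive, wheel_path)
    shutil.rmtree(work_dir)
    return wheel_path
```

Note that the RECORD file in the .dist-info directory will not list the newly copied DLLs; as far as I know pip does not verify it at install time, but regenerating the wheel from the build folder is the cleaner route.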

I already thought about something similar: copy the files into the build folder and then create the wheel with python setup.py bdist_wheel. But I’m not sure if this is enough.

I added the missing files to my build and built the wheel: torch-1.6.0a0+b31f58d-cp37-cp37m-win_amd64.whl
I have tested this file on my computer under another Windows (Windows 7).