I have a GT 710, which has compute capability 3.5. This means that if I want to use PyTorch with the GPU, I have to build PyTorch from source.
I have already made several attempts, all unsuccessful. I didn’t expect the build process to take hours, with the CPU 100% busy the whole time. I work on Windows, which makes the installation process even more complicated. I have a few questions and would like to have some answers before I proceed to the next attempt.
I use the Unity ML-Agents toolkit. It requires pytorch >=1.6.0,<1.9.0. I chose the minimum configuration (PyTorch 1.6, CUDA 9.2, cuDNN v7) to ease the build process.
Question 1: Is it justified to use the minimal configuration?
I found the PyTorch 1.6 branch at https://github.com/pytorch/pytorch/tree/release/1.6, but I’m not sure if this is correct.
Question 2: Where should I look for pytorch repositories of different versions?
I have Visual Studio 2019 16.11.7 and I can’t install toolset 14.11. The instructions say:
“There is no guarantee of the correct building with VC++ 2017 toolsets, others than version 15.4 v14.11.”
Does this also apply to Visual Studio 2019?
Question 3: Can I use Visual Studio 2019 to build?
I do not see how I can specify the desired compute capability. Maybe:
set TORCH_CUDA_ARCH_LIST=3.5
Question 4: How do I specify the desired compute capability?
Let’s say I built the PyTorch binaries successfully. Where are these binaries located, and how do I install them?
Question 5: How do I install new pytorch binaries?
You can pick any PyTorch tag that supports your setup (e.g. CUDA 9.2). The currently required minimum CUDA toolkit version is 10.2, so you should double-check that PyTorch 1.6.0 still supports 9.2. Yes, the release/1.6 branch is correct. You could alternatively use git checkout v1.6.0.
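For reference, checking out the 1.6.0 release could look like this (a sketch assuming a Windows command prompt; the --recursive flag pulls in the third-party submodules the build needs):

```shell
REM Clone the repository including submodules (this is large, several GB)
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

REM Either check out the release branch...
git checkout release/1.6
REM ...or the exact release tag
git checkout v1.6.0

REM After switching branches/tags, re-sync the submodules to match
git submodule sync
git submodule update --init --recursive
```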
The different release versions are all tagged in the GitHub repository. I don’t know which Visual Studio versions are required, as I’m not using Windows.
See 2.
Using TORCH_CUDA_ARCH_LIST is correct. The source build will be installed into your current environment if you use python setup.py install and will be usable there. The installed package will also be located in your environment: e.g. if you use conda to manage your environments and install into the base environment, you would find the package in /opt/conda/lib/python3.8/site-packages/torch.
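On Windows, the build-and-install step could be sketched like this (assuming a command prompt inside the cloned pytorch directory, with the target Python environment already activated; the variable values are examples for this particular setup):

```shell
REM Target only compute capability 3.5 (GT 710). No quotes in cmd,
REM otherwise they would become part of the value.
set TORCH_CUDA_ARCH_LIST=3.5
set USE_CUDA=1

REM Build and install into the currently active Python environment
python setup.py install

REM Verify where the package landed and whether CUDA is usable
python -c "import torch; print(torch.__file__); print(torch.cuda.is_available())"
```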
I have built PyTorch with compute capability 3.5 and this works for me. I then built a whl file, “torch-1.6.0a0+31f58d-cp37-cp37m-win_amd64.whl”, but it does not contain CUDA support.
How can I build a .whl like the official one?
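Part of the confusion may be the filename itself: the wheel name only records which source tree was built, not whether CUDA was compiled in. As far as I can tell, the official wheels mark the CUDA variant in the local version suffix (e.g. +cu92), while a plain source build stamps in the git hash instead. A small sketch of the PEP 427 naming scheme (the helper function is my own, for illustration):

```python
def parse_wheel_name(filename):
    # Split a simple wheel filename into its PEP 427 fields:
    # {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl
    stem = filename[:-len(".whl")]
    distribution, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "distribution": distribution,
        "version": version,
        "python": python_tag,
        "abi": abi_tag,
        "platform": platform_tag,
    }

# A local source build encodes the git hash in the local version segment:
print(parse_wheel_name("torch-1.6.0a0+31f58d-cp37-cp37m-win_amd64.whl")["version"])
# -> 1.6.0a0+31f58d

# The official CUDA 9.2 wheel instead carries a +cu92 suffix:
print(parse_wheel_name("torch-1.6.0+cu92-cp37-cp37m-win_amd64.whl")["version"])
# -> 1.6.0+cu92
```

Either way, whether CUDA is actually inside the wheel depends on the build-time settings (USE_CUDA and friends), not on the name.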
I followed these steps:
First I installed Visual Studio 2017 with the toolset 14.11.
Then I installed CUDA 9.2 and cuDNN v7.
Clone PyTorch Source:
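(The clone step was presumably something along these lines, assuming the v1.6.0 tag:)

```shell
REM Clone with submodules and pin to the 1.6.0 release tag
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git checkout v1.6.0
git submodule sync
git submodule update --init --recursive
```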
I haven’t built a wheel for Windows (Linux only, with CUDA support), but I would try to reuse the build steps used for the official Windows wheels, found e.g. here.
I already thought about something similar: copy the files to the build folder, then create the wheel with python setup.py bdist_wheel. But I’m not sure if this is enough.
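Building the wheel directly from the source tree, with the same environment variables as the regular build, might be enough on its own. A sketch (the wheel filename is the one from my earlier build attempt):

```shell
REM Same CUDA settings as for the regular build, so the wheel
REM is compiled with CUDA support included
set USE_CUDA=1
set TORCH_CUDA_ARCH_LIST=3.5

REM Build the wheel; setuptools places the .whl in the dist\ folder
python setup.py bdist_wheel

REM Install it into any compatible environment with pip
pip install dist\torch-1.6.0a0+31f58d-cp37-cp37m-win_amd64.whl
```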