Are there any recent, in-depth (step-by-step) guides on building and compiling PyTorch on Ubuntu?
I have followed GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration but perhaps it requires some knowledge that I don’t have (I’m an intermediate Ubuntu user… I have never compiled anything!).
Or are there some general guides with very similar steps?
I have made many attempts at compiling PyTorch but have received many errors.
For example, it said I had installed PyTorch 1.9 (why not 1.8?) in the conda environment where I built it (and I also ran “conda list” to check the version), but when I print the PyTorch version it shows 1.5…
And this version works, but it gives lower accuracy (-20%) than PyTorch 1.8.
I don’t understand why…
A source build would currently show 1.9.0a+commit, since it’s the version after the 1.8.0 release and is moving towards 1.9, so that’s expected.
If your environment shows 1.5, then you have most likely installed multiple PyTorch versions in the same environment (via conda, pip, or a source build) and would need to either remove the others or create a new environment and build PyTorch from source there.
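To see which installation Python actually picks up, you could run a quick check like this (a minimal sketch — the printed path will differ on your machine, and it tells you whether the imported copy comes from site-packages or from a source build):

```python
import importlib.util

# Check whether torch can be found at all in this environment
spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not installed in this environment")
else:
    import torch
    # __version__ is the version Python actually imports;
    # __file__ shows where that copy was loaded from, which helps
    # spot a stale install shadowing your source build
    print(torch.__version__)
    print(torch.__file__)
```

If the printed path points somewhere other than the environment you built in, an older install is shadowing the new build.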
First, thanks for your response (I have always used your tips and hints on this forum in the past).
I’m sure it happened as you said, but I don’t know how it’s possible… anyway, I will try to format and redo everything from the beginning.
In any case, which walkthrough would you suggest to help me with my build-and-compile problem?
Do you know of a more detailed guide?
I really would like to learn it.
I would start with a new and clean conda virtual environment, clone the repository and update all submodules:
git clone --recursive https://github.com/pytorch/pytorch
and install it using the already posted build instructions.
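As a rough sketch, the whole sequence could look like the following (the environment name and Python version are just examples, and the dependency list follows the repository README, which may change between releases — always check the current instructions):

```shell
# Create and activate a clean conda environment (name is an example)
conda create -n pytorch-build python=3.8 -y
conda activate pytorch-build

# Install common build dependencies (see the README for the current list)
conda install cmake ninja numpy pyyaml setuptools cffi typing_extensions -y

# Clone the repository with all submodules
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# If you cloned earlier without --recursive, sync the submodules instead
git submodule sync
git submodule update --init --recursive

# Build and install into the currently active environment
python setup.py install
```

Building in a fresh environment avoids the version-shadowing issue you hit before, since no other PyTorch copy exists there.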
If you see any issues with the build, just post an update here (with the error message from the logs) and we can check what’s wrong.
In case you have trouble finding some CUDA libs during the build process, you could also try using a docker container with the CUDA toolkit already installed and check the installation inside it first.
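For instance, you could start from one of NVIDIA's devel images, which ship the full CUDA toolkit (the image tag below is just an example — pick one matching the CUDA version you want to build against, and note that `--gpus all` assumes the NVIDIA container runtime is installed on the host):

```shell
# Start an interactive container with GPU access and the CUDA toolkit inside
docker run --gpus all -it nvidia/cuda:11.1-devel-ubuntu20.04 bash

# Then, inside the container, verify the toolkit and driver are visible:
#   nvcc --version    # compiler shipped with the toolkit
#   nvidia-smi        # driver and GPUs visible to the container
```

If `nvcc` and `nvidia-smi` both work inside the container, the CUDA setup itself is fine and any remaining build failure points elsewhere.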