[CUDA] How to manually install binaries

Heyho,
I would like to use my (no longer supported) ‘Tesla K10.G1.8GB’ GPUs to train my model with PyTorch 0.4.0. The GPUs have a compute capability of 3.0. Will it work with PyTorch 0.4.0 if I install the binaries found at https://pytorch.org/previous-versions/? Or do I need to downgrade PyTorch?

Thanks!

Raph

I’m not sure if it works with the PyTorch 0.4 binary out of the box. If it doesn’t, it might be easier to build from source: https://github.com/pytorch/pytorch.
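
If you want to test the binary first, here is a minimal sanity check (assuming a CUDA-enabled PyTorch 0.4.0 install); it just reports the device's compute capability and forces a real kernel launch:

```python
# Minimal sanity check: print the GPU's compute capability and run a
# small matmul on it, so a missing-architecture build fails loudly here.
import torch

if not torch.cuda.is_available():
    print("This build cannot see a CUDA device")
else:
    major, minor = torch.cuda.get_device_capability(0)
    print("Device:", torch.cuda.get_device_name(0))
    print("Compute capability: {}.{}".format(major, minor))
    print("Built against CUDA:", torch.version.cuda)
    try:
        x = torch.randn(64, 64, device="cuda")
        y = x @ x           # forces an actual kernel launch on the GPU
        torch.cuda.synchronize()
        print("Kernel launch succeeded")
    except RuntimeError as err:
        print("Kernel launch failed:", err)
```

If the pre-built binary was compiled without sm_30 kernels, the failure usually shows up as a "no kernel image is available for execution on the device" RuntimeError, which is the cue to build from source.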

Gonna try this. When I need help (which I probably will, because I’ve never installed anything from source :smiley:) I will come back to you. Thanks!

@richard so as promised: I am trying to install it from source in a conda environment, and I get the following error:

cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by cmake)
cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by cmake)
cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by cmake)
cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by cmake)
Failed to run 'bash tools/build_pytorch_libs.sh --with-nnpack --with-mkldnn caffe2 nanopb libshm gloo THD'

cmake and gcc are both installed :confused:
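
The missing GLIBCXX_3.4.2x symbols suggest the cmake in the conda environment was built against a newer libstdc++ than the system copy it is picking up. A small diagnostic sketch (assuming the `strings` utility from binutils is on PATH) to see which version tags each copy actually exports:

```python
# List the GLIBCXX version tags exported by a given libstdc++.so.6,
# so the system copy can be compared against the one shipped with conda.
import re
import subprocess

def glibcxx_versions(path):
    out = subprocess.check_output(["strings", path]).decode()
    # sorted lexicographically -- good enough for a quick visual check
    return sorted(set(re.findall(r"GLIBCXX_\d+(?:\.\d+)*", out)))

# The copy the cmake error complains about:
print(glibcxx_versions("/usr/lib/x86_64-linux-gnu/libstdc++.so.6"))

# Hypothetical path -- adjust to your own conda environment:
# print(glibcxx_versions("/path/to/conda/env/lib/libstdc++.so.6"))
```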

What is your gcc version?

What GCC version is recommended?

I have GCC 4.9 on Ubuntu 14.04 with CUDA 9.0.

The GCC version depends on the CUDA toolkit version as well. You can mostly go with the above combination; at least I know it works.
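
For anyone checking their own setup, a quick sketch that just reports which gcc and nvcc are on PATH (nothing PyTorch-specific), to confirm the compiler / CUDA toolkit pairing before building:

```python
# Print the gcc and (if installed) nvcc versions visible on PATH.
import subprocess

for tool in ("gcc", "nvcc"):
    try:
        out = subprocess.check_output([tool, "--version"]).decode()
        # gcc prints its version on the first line, nvcc on the last one
        line = out.splitlines()[0] if tool == "gcc" else out.strip().splitlines()[-1]
        print("{}: {}".format(tool, line))
    except OSError:
        print("{}: not found on PATH".format(tool))
```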

I have GCC 7.3 with no CUDA.

I get GCC exit status 4