K40 is not supported by PyTorch

The latest PyTorch no longer seems to support the K40 GPU, because the release builds are not compiled for compute capability 3.5. I understand that building from source is one solution, but it is inconvenient in many ways and will certainly close the door to many new users. If binary size is the only reason compute capability 3.5 was removed, can we please add it back?
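
For reference, this is a quick way to compare the card's compute capability with the architectures the installed binary was compiled for (a minimal sketch; `torch.cuda.get_arch_list()` only exists in more recent releases, roughly 1.7 and later):

```python
import torch

# Compute capability of the installed card, e.g. (3, 5) for a Tesla K40
major, minor = torch.cuda.get_device_capability(0)
print(f"device capability: sm_{major}{minor}")

# Architectures the installed PyTorch binary was compiled for,
# e.g. ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75']
print("binary compiled for:", torch.cuda.get_arch_list())
```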


I get a not-very-helpful message:

Tesla K40m with CUDA capability sm_35 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
If you want to use the Tesla K40m GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

I went to the page, but it doesn’t tell me which version of PyTorch to install or anything like that. Can we get more precise instructions on exactly what we need to downgrade to?

Any help? @ptrblck, can you ping the person who might know? Sorry to bug you directly, but I had a hunch you might know.

Thanks!

You would have to build PyTorch from source, as this compute capability has not been shipped in the binaries since PyTorch 1.5, if I’m not mistaken.
Contributions to improve these error messages are always welcome.
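
If anyone follows this route: the usual way to target the K40 in a source build is to set `TORCH_CUDA_ARCH_LIST` (e.g. to `"3.5"`) in the environment before running `setup.py`, and afterwards you can verify from Python that sm_35 actually made it into the build (a minimal sketch, assuming the build succeeded and a K40 is visible):

```python
import torch

# The source build should have been run with TORCH_CUDA_ARCH_LIST="3.5"
# set in the environment before `python setup.py install`.
assert "sm_35" in torch.cuda.get_arch_list(), "sm_35 was not compiled in"

# Quick smoke test: launch an actual kernel on the K40.
x = torch.randn(64, 64, device="cuda:0")
print((x @ x).sum().item())
print("OK on", torch.cuda.get_device_name(0))
```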

ok got it :’( thanks regardless.

Hello,

Just wondering whether anyone has succeeded in building PyTorch (versions 1.6 to 1.9) from source to work with K40 GPUs. We are trying to do this with gcc 9.0, and the compilation proceeds up to a certain point where an error says that numa.h is missing.

Has anyone experienced this?

Thank you.