Torch not compiled with CUDA enabled... again - SOLVED

Sorry to post this again, but after reading ALL of the available topics on this forum, I still can’t get my CUDA to be recognized by PyTorch. Does the whole compatibility matrix collapse with every single little update? I don’t know anymore which torch version goes with which CUDA, with which cuDNN, with which Python version, etc. This is madness.

Please post additional info (OS, GPU type, Python version, how you are installing, etc.), so that someone who knows or who has gone through similar issues can help you.
Without that info, this post will probably not get any replies.

I solved it. So, for those who are using Jupyter notebooks: create an environment in Anaconda (NOT Miniconda) and then activate it from the Anaconda Navigator.
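For anyone trying to reproduce this from the command line instead of the Navigator, the steps amount to roughly the following sketch (the environment name `torch-env` and the Python version are just examples, not what the original poster used):

```shell
# Create a fresh conda environment (name and Python version are illustrative)
conda create -n torch-env python=3.9
conda activate torch-env

# Register the environment as a Jupyter kernel so notebooks actually run
# inside it, instead of the base environment
pip install ipykernel
python -m ipykernel install --user --name torch-env
```

After this, the new kernel should appear in the notebook's kernel picker; a common cause of "Torch not compiled with CUDA enabled" is a notebook silently running in a different environment than the one PyTorch was installed into.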

No, the compute capabilities weren’t changed in the last year, with the exception that new Ampere devices need the CUDA 11 runtime.

You could install the binaries using the commands provided on the website, which mention the CUDA runtime version shipped in the binaries. Your local CUDA toolkit won’t be used unless you build PyTorch from source or compile custom CUDA extensions.
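As an illustration, the install selector on the website produces commands of roughly this shape (the exact versions change over time, so copy the current command from the site rather than from here), and you can then check which CUDA runtime the binaries actually ship with:

```shell
# Example selector output for a CUDA 11.1 build (versions are illustrative)
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html

# Verify the shipped CUDA runtime and whether the GPU is visible
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```

If `torch.version.cuda` prints `None`, a CPU-only build was installed, which produces exactly the "Torch not compiled with CUDA enabled" error.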

Good to hear you solved it, as the majority of these issues are caused by a broken environment.