I’m trying to install PyTorch >= 1.6 with CUDA 10.0 in an Anaconda environment. However, I’m getting conflicts and am not able to install it.
To minimize conflicts, I’m now creating a new environment with only numpy, python, pytorch, and cudatoolkit. Even then conda reports conflicts and the process hangs at “examining conflicts”.
Last time I worked around this problem (see here) by using PyTorch 1.3. However, this time I strictly need PyTorch >= 1.6.
My remote system doesn’t support any other CUDA version, and I can’t modify what’s already installed on it.
Note that your system doesn’t need a local CUDA installation if you just want to use PyTorch, since the conda binaries and pip wheels ship with their own CUDA runtime. You would only need a local CUDA installation if you want to build PyTorch from source or build a custom CUDA extension.
Your workstation would thus only need a sufficiently new NVIDIA driver for the used CUDA runtime.
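To illustrate the driver-vs-runtime relationship, here is a small sketch that checks whether an NVIDIA driver version is new enough for a given bundled CUDA runtime. The minimum-driver numbers are approximate Linux values taken from NVIDIA’s CUDA release notes; verify them against the docs for your exact versions, and note that `driver_supports` is a hypothetical helper, not part of any library.

```python
# Approximate minimum Linux driver versions per CUDA runtime,
# per NVIDIA's CUDA Toolkit release notes (verify for your setup).
MIN_DRIVER = {
    "10.0": (410, 48),
    "10.1": (418, 39),
    "10.2": (440, 33),
    "11.0": (450, 36),
}

def parse_version(v: str) -> tuple:
    # Compare only the first two numeric components, e.g. "440.33.01" -> (440, 33)
    return tuple(int(p) for p in v.split(".")[:2])

def driver_supports(driver: str, cuda_runtime: str) -> bool:
    # Hypothetical helper: True if the driver can run binaries
    # shipping the given CUDA runtime.
    return parse_version(driver) >= MIN_DRIVER[cuda_runtime]

print(driver_supports("440.33", "10.2"))  # True
print(driver_supports("418.39", "11.0"))  # False: driver too old for CUDA 11.0 binaries
```

So a driver that satisfies, say, the CUDA 10.2 requirement lets you use the CUDA 10.2 binaries even if no CUDA 10.2 toolkit is installed locally.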
The binaries now ship with CUDA 9.2, 10.1, 10.2, and 11.0, so I’m unsure where you found the 10.0 version.
So is it possible to install PyTorch without the bundled CUDA runtime and let PyTorch use the CUDA library already present on my system? My system already has CUDA 10.0 installed.
Does that mean I should not use a conda virtual environment and instead install the required packages locally?
Yes, that’s possible if you build from source as described here.
No, you can still use virtual environments (and I would also recommend doing so).
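Once PyTorch is installed (from source against your local CUDA 10.0, or from binaries), a quick sanity check confirms which CUDA runtime it was built with and whether the GPU is visible. This is a minimal sketch using the public `torch.version.cuda` and `torch.cuda.is_available()` attributes, guarded so it also runs in an environment without PyTorch:

```python
def cuda_summary() -> dict:
    """Report the installed PyTorch version, its CUDA runtime,
    and GPU visibility; degrades gracefully if torch is absent."""
    try:
        import torch
        return {
            "torch": torch.__version__,
            "cuda": torch.version.cuda,       # CUDA runtime the binary was built with
            "gpu": torch.cuda.is_available(),  # driver + device actually usable
        }
    except ImportError:
        return {"torch": None, "cuda": None, "gpu": False}

print(cuda_summary())
```

For a source build against the local toolkit, `cuda` should report 10.0; if `gpu` is False despite an installed driver, the driver is likely too old for the runtime in use.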