I am a beginner in this field, currently trying to set up a deep learning environment for my GPU, and I am a bit lost.
- I tried to install driver 418 because I want CUDA 10.1, which PyTorch supports, but I got driver 450.66 instead. When I type nvidia-smi, it shows CUDA version 11.0; from searching, I learned that this is the highest CUDA version the driver API supports, not what is actually installed. If I install another CUDA version, 10.1, will my environment be messed up?
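From what I read, the CUDA version nvidia-smi shows is the highest runtime the driver supports, and drivers are backward compatible, so driver 450.66 (which reports CUDA 11.0) should be able to run the 10.1 runtime. My understanding, as a toy sketch (the function name and hardcoded versions here are only an illustration, not a real API):

```python
# Toy illustration of driver/runtime compatibility: a driver can run any
# CUDA runtime up to the version nvidia-smi reports for it.
def runtime_supported(driver_cuda, runtime_cuda):
    """True if a driver reporting `driver_cuda` in nvidia-smi can run `runtime_cuda`."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(runtime_cuda) <= as_tuple(driver_cuda)

print(runtime_supported("11.0", "10.1"))  # driver 450.66 reports 11.0 -> True
```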
When I install CUDA, should I also install cuDNN at the same time?
However, I found a cuDNN header already present at /usr/include/cudnn.h.
Since the machine was not a clean install, I am not sure what has already been set up on it, so I am a little confused about whether my environment is messed up. How can I check that everything is configured correctly for a deep learning environment using PyTorch?
If you have a new device, I would actually advise using conda to install these things. You only need to install the NVIDIA driver (which you already have); everything else comes as conda packages inside a given environment. This means that if you mess things up, you can just remove that environment and create a new one, and all is good again!
But once you have the NVIDIA driver and conda, you can just follow the instructions at https://pytorch.org/get-started/locally/ to install with conda. It will automatically install the right version of CUDA for you (as well as all the other required dependencies)!
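As a quick sanity check after the install, something like this can be run inside the environment (a minimal sketch; the import is guarded so it also runs, and tells you so, in an environment where torch is missing):

```python
import importlib.util

def torch_env_summary():
    """One-line summary of the PyTorch/CUDA setup in the current environment."""
    # Guard the import so the check degrades gracefully without torch.
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed in this environment"
    import torch
    return (f"torch {torch.__version__}, built for CUDA {torch.version.cuda}, "
            f"GPU available: {torch.cuda.is_available()}")

print(torch_env_summary())
```

If torch.cuda.is_available() comes back False even though nvidia-smi works, the usual suspect is a mismatch between the driver and the CUDA build of the installed PyTorch package.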
Thank you for your advice.
So the steps would be: driver --> conda --> create environment --> install PyTorch?
So every time we create a new environment, we should install PyTorch with CUDA in it?
Or should I install PyTorch with CUDA first and then create the environment?
My GPU is in a server with several users. Would it be wise to install the driver and conda as root, and then enable conda for all users?
Thank you so much
Especially for a shared server, I would recommend having as few things in the global install as possible. That gives users the freedom to install whichever variant they want, and prevents them from breaking the whole system if they do bad things.
This works well in particular because installing CUDA within conda is very simple and safe.
So I would suggest:
(for everyone): driver -> Conda
(for each user): create env -> install PyTorch (that will pull CUDA automatically)
Note that you can also have each user install their own conda if you want to ensure more separation between users.
The benefit of a shared conda is that packages will be re-used between users, reducing the disk footprint.