Should I install the extra cudatoolkit and cudnn?

I installed the latest PyTorch from the official site with the command “conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia”. When I run “torch.cuda.is_available()”, the output is True. Does that mean I don’t have to install cudatoolkit and cuDNN if I want to run my model on the GPU? My computer is brand new and I haven’t installed cudatoolkit or cuDNN separately.
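
For reference, this is roughly the check I ran; the device-name line is just an extra sanity check on top of what I described above:

```python
import torch

# Reports whether PyTorch can see a CUDA-capable GPU through the driver
print(torch.cuda.is_available())  # prints True on my machine

# Extra sanity check: name of the first visible GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```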


The PyTorch binaries ship with all CUDA runtime dependencies, so you don’t need to install a CUDA toolkit or cuDNN locally. Only a properly installed NVIDIA driver is needed to execute PyTorch workloads on the GPU.
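
If you want to confirm what your installed binary bundles, something along these lines should work (the exact version numbers will depend on the build you installed):

```python
import torch

# CUDA runtime version the binary was built against (bundled with the wheel/conda package,
# not taken from a locally installed toolkit)
print(torch.version.cuda)

# cuDNN version shipped with the binary
print(torch.backends.cudnn.version())

# Whether the NVIDIA driver exposes a usable GPU
print(torch.cuda.is_available())
```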

Seriously? I know I should trust what you say because you are an expert, but your opinion contradicts other Google search results and other posts here on the forum.


You don’t need to trust me, of course, and can stick to other people’s opinions. If you check my posts here, you’ll see I’ve verified multiple times that no CUDA toolkit is needed and that the binaries work fine with their own shipped dependencies, e.g. by installing them in a clean Ubuntu docker container. I’m also updating the CUDA dependencies in the binaries in the pytorch/builder repository and should thus know how the dependencies are used.
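
If you don’t want to spin up a container yourself, a quick way to inspect your setup is the built-in collect_env helper, which prints (among other things) the CUDA version PyTorch was built with, the cuDNN version it sees, and your driver version:

```python
# Prints a summary of the Python/PyTorch/CUDA environment.
# Equivalent to running: python -m torch.utils.collect_env
from torch.utils import collect_env

collect_env.main()
```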