Multiple Anaconda environments for different PyTorch versions

I have installed CUDA 11.3.1 with cuDNN 8.2.1 on my Windows 10 desktop PC.
I have an Anaconda environment with PyTorch 1.12.1, cudatoolkit 11.3.1, cuDNN 8.2.1, and Python 3.7.
I want to create another Anaconda environment in order to install PyTorch 1.4.0 (with Python 3.6).

According to the PyTorch website, PyTorch 1.4.0 works with CUDA 10.1.
So I need to install CUDA 10.1 on my Windows desktop (in addition to the existing CUDA 11.3.1).

Under this circumstance, I have the following questions:

  1. After installing another CUDA version on my desktop, do I need to explicitly set the system variables for this newly installed CUDA? (I found that there are two system variables for the existing CUDA 11.3.1: “CUDA_PATH” and “CUDA_PATH_V11_3”.)

  2. Suppose I create a new Anaconda environment with Python 3.6.
    Then, in order to install PyTorch 1.4.0, do I need to specify the cudatoolkit version in the command,
    or will PyTorch automatically grab the appropriate CUDA (i.e., CUDA 10.1) on my desktop PC?

    i.e.,
    conda install pytorch==1.4.0 torchvision==0.5.0 torchaudio cudatoolkit==10.1 -c pytorch
    vs.
    conda install pytorch==1.4.0 torchvision==0.5.0 torchaudio -c pytorch

You don’t need to install a local CUDA toolkit to execute the PyTorch binaries as they ship with their own CUDA runtime, cuDNN, NCCL etc.
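
For instance (a minimal sketch, not from the original thread), you can confirm from Python which CUDA runtime and cuDNN version the installed binaries actually bundle, independently of any system-wide CUDA installation:

import torch

# These report what ships inside the PyTorch binaries themselves,
# not the system-wide CUDA toolkit installed on the machine.
print(torch.__version__)               # installed PyTorch version, e.g. 1.12.1
print(torch.version.cuda)              # CUDA runtime bundled with the binary, e.g. 11.3
print(torch.backends.cudnn.version())  # bundled cuDNN version
print(torch.cuda.is_available())       # True if the NVIDIA driver can serve this runtime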

Yes, you should use the full install command, including cudatoolkit, if one is given.

I had missed the step of installing, in my Anaconda environment, the old cudatoolkit with the specific version given in the PyTorch installation command (in my case cudatoolkit==10.1). After installing this cudatoolkit in Anaconda, I successfully installed the old PyTorch version. Thanks.

BTW, the whole mess came up because of “next(self.parameters()).device” in my old PyTorch script, which no longer seems to be supported in recent PyTorch.

I’m unsure what “mess” you mean as the code still works in the current nightly release:

from torchvision import models

model = models.resnet101()
model.cuda()

next(model.parameters()).device
# device(type='cuda', index=0)

Does “current nightly release” in your reply mean PyTorch 1.13.1?
(Some of my packages do not support Python 3.9 or later.)

With PyTorch 1.12.1 / Python 3.7.16, I got the following StopIteration error (there is no error with PyTorch 1.4.0 and Python 3.6.9).

Traceback (most recent call last):
  File "synthesizer_train_kor_tv.py", line 53, in <module>
    train(**vars(args))
  File "C:\East\dreambyte\Deepfake\pyworks\RTVC_Korean\synthesizer\train_kor_tv.py", line 321, in train
    data_parallel_workaround(model, texts, mels, embeds, val_cycle)  # ./synthesizer/utils/__init__.py
  File "C:\East\dreambyte\Deepfake\pyworks\RTVC_Korean\synthesizer\utils\__init__.py", line 17, in data_parallel_workaround
    outputs = torch.nn.parallel.parallel_apply(replicas, inputs)
  File "C:\Users\East\.conda\envs\ml_gpu\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "C:\Users\East\.conda\envs\ml_gpu\lib\site-packages\torch\_utils.py", line 461, in reraise
    raise exception
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
  File "C:\Users\East\.conda\envs\ml_gpu\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "C:\Users\East\.conda\envs\ml_gpu\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\East\dreambyte\Deepfake\pyworks\RTVC_Korean\synthesizer\models\tacotron.py", line 491, in forward
    device = next(self.parameters()).device  # use same device as parameters
StopIteration
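
For what it's worth, this StopIteration is the known interaction between next(self.parameters()) inside forward() and torch.nn.DataParallel: in newer PyTorch versions the per-GPU replicas no longer expose their parameters through parameters(), so the iterator comes back empty. A common workaround (a minimal sketch with a hypothetical module, not the actual Tacotron code) is to take the device from an input tensor instead:

import torch
import torch.nn as nn

class Synth(nn.Module):  # hypothetical stand-in for the Tacotron model in the traceback
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    def forward(self, x):
        # Avoid `device = next(self.parameters()).device` here: inside a
        # DataParallel replica the parameters() iterator can be empty,
        # which raises StopIteration. Use an input tensor's device instead.
        device = x.device
        pos = torch.arange(x.size(1), device=device, dtype=x.dtype)
        return self.linear(x) + pos

model = nn.DataParallel(Synth().cuda())   # requires at least one CUDA GPU
out = model(torch.randn(4, 8).cuda())
print(out.shape)  # torch.Size([4, 8])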