PyTorch installed with CUDA 9 seems to call it automatically


It seems that PyTorch installed with CUDA 9 calls CUDA automatically. Indeed, when I run code that never calls CUDA explicitly (precisely so that it runs without CUDA), it returns this error:

File "......../", line 156, in train
    output, hidden = model(_data, hidden)
File ".......//anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/modules/", line 325, in __call__
    result = self.forward(*input, **kwargs)
File "/.......//", line 182, in forward
    output, hidden = self.rnn(dropped_out_input, hidden)
File "......./anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/modules/", line 325, in __call__
    result = self.forward(*input, **kwargs)
File ".......//anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/modules/", line 169, in forward
    output, hidden = func(input, self.all_weights, hx)
File "......./anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/_functions/", line 385, in forward
    return func(input, *fargs, **fkwargs)
File ".......//anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/_functions/", line 245, in forward
    nexth, output = func(input, hidden, weight)
File ".......//anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/_functions/", line 85, in forward
    hy, output = inner(input, hidden[l], weight[l])
File ".......//anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/_functions/", line 114, in forward
    hidden = inner(input[i], hidden, *weight)
File ".......//anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/_functions/", line 32, in LSTMCell
    gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)
File ".......//anaconda3/envs/deep-learning-env/lib/python3.6/site-packages/torch/nn/", line 835, in linear
    return torch.addmm(bias, input, weight.t())

And in particular:

RuntimeError: Expected object of type Variable[torch.cuda.FloatTensor] but found type Variable[torch.FloatTensor] for argument #1 'mat1'

When I run it with CUDA explicitly, there is no error.
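Concretely, by "with CUDA explicitly" I mean the usual pattern of moving both the model and the input tensors to the GPU by hand, along these lines (a minimal sketch with a toy LSTM, not my actual model; the `is_available()` guard keeps it runnable on a CPU-only install):

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=4, hidden_size=8)
data = torch.randn(5, 1, 4)  # (seq_len, batch, input_size)

# Explicitly move both the model and the input to the GPU when available.
# Moving only one of the two is exactly what produces a
# "torch.cuda.FloatTensor vs torch.FloatTensor" mismatch.
if torch.cuda.is_available():
    model = model.cuda()
    data = data.cuda()

output, (h_n, c_n) = model(data)
```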

Finally, when I run it with PyTorch installed without CUDA, i.e. installed with this command:

conda install pytorch-cpu torchvision -c pytorch

and without using CUDA, there is no error either.
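For reference, the usual way to write code that runs unchanged on either install is the device-agnostic pattern: pick the device once and move everything to it (a minimal sketch; `torch.device` is available from PyTorch 0.4 onward):

```python
import torch
import torch.nn as nn

# Choose the device once, then move the model and every input tensor to it.
# Keeping all tensors on one device avoids the type-mismatch error entirely.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)
x = torch.randn(3, 4, device=device)
y = model(x)
```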

So, does PyTorch call CUDA automatically when it is installed with CUDA support?

If so, why? If not, why do I get this error?

Thank you in advance for your answers.

Could you provide a script that shows PyTorch calling CUDA without you including any .cuda() calls?