When trying to create a module, I get CUDA device errors even though devices are available

Stack trace:

self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
  File "/pytorch4/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 409, in __init__
    super(LSTM, self).__init__('LSTM', *args, **kwargs)
  File "/pytorch4/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 52, in __init__
    w_ih = Parameter(torch.Tensor(gate_size, layer_input_size))
RuntimeError: CUDA error (10): invalid device ordinal

I'm confused as to how CUDA gets involved here, since I haven't issued any CUDA commands. I am merely calling the LSTM constructor.
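For reference, this is roughly the code that triggers it (the module name and sizes below are placeholders, not my real values); there is no .cuda() or .to(device) call anywhere:

import torch.nn as nn

# Placeholder sizes -- the real values don't matter for the error
n_features = 10
hidden_size = 64

class Encoder(nn.Module):
    def __init__(self, n_features, hidden_size):
        super(Encoder, self).__init__()
        # The RuntimeError above is raised inside this constructor call,
        # before the model is ever moved to a GPU
        self.lstm = nn.LSTM(input_size=n_features,
                            hidden_size=hidden_size,
                            batch_first=True)

model = Encoder(n_features, hidden_size)  # no explicit CUDA calls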

Here is some CUDA output from my system:

>>> import torch
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device object at 0x2afdfbe226a0>
>>> torch.cuda.device_count()
1
>>> torch.cuda.get_device_name(0)
'Tesla K40m'
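In case it helps, here are a few more environment details I can collect the same way (these are just diagnostic reads; I have not pasted their output here):

import os
import torch

# Extra diagnostics: the installed torch build, the CUDA toolkit it was
# built against, the cuDNN version, and whether CUDA_VISIBLE_DEVICES is set
print(torch.__version__)
print(torch.version.cuda)
print(torch.backends.cudnn.version())
print(os.environ.get('CUDA_VISIBLE_DEVICES'))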

Which version of PyTorch are you using?

I am not getting the error in 0.4.1.
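If you are on an older build, a quick way to confirm which version your environment is actually importing is something like the snippet below; the pip command in the comment is just one possible way to upgrade, depending on how you installed PyTorch:

import torch

# Confirm which build is being imported in this environment
print(torch.__version__)

# If this prints something older than 0.4.1, upgrading may be worth trying,
# e.g.:  pip install --upgrade torch   (adjust for your install method)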