Sorry, I’m very new to deep learning and PyTorch. I’m looking at some code for an LSTM in PyTorch.
Why does he use Variable in
dataX = Variable(torch.Tensor(np.array(x)))
dataY = Variable(torch.Tensor(np.array(y)))
trainX = Variable(torch.Tensor(np.array(x[0:train_size])))
trainY = Variable(torch.Tensor(np.array(y[0:train_size])))
testX = Variable(torch.Tensor(np.array(x[train_size:len(x)])))
testY = Variable(torch.Tensor(np.array(y[train_size:len(y)])))
and
def forward(self, x):
    h_0 = Variable(torch.zeros(
        self.num_layers, x.size(0), self.hidden_size))
    c_0 = Variable(torch.zeros(
        self.num_layers, x.size(0), self.hidden_size))
According to this question, python - How to load a list of numpy arrays to pytorch dataset loader? - Stack Overflow, you should use TensorDataset to convert a list of 2D arrays into PyTorch inputs, so why does he use Variable here?
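For reference, this is how I understood TensorDataset would be used, based on that Stack Overflow answer (my own sketch; the array shapes and names are just placeholders, not the real data from the code):

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Toy data standing in for the real x and y (shapes are made up)
x = np.random.rand(100, 4, 1).astype(np.float32)
y = np.random.rand(100, 1).astype(np.float32)

# Wrap the tensors in a TensorDataset and iterate with a DataLoader
dataset = TensorDataset(torch.from_numpy(x), torch.from_numpy(y))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for batch_x, batch_y in loader:
    print(batch_x.shape, batch_y.shape)
    break
```

Is this the preferred way, instead of wrapping everything in Variable?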
Besides, why does he use Variable in the forward function? Isn’t torch.zeros enough?
My second question: when I try to run this code, I get the error ‘RuntimeError: Input and parameter tensors are not at the same device, found input tensor at cuda:0 and parameter tensor at cpu’. How should I fix this? To which lines should I add a .to(device) call?
I added .cuda() as follows:
dataX = Variable(torch.Tensor(np.array(x))).cuda()
dataY = Variable(torch.Tensor(np.array(y))).cuda()
trainX = Variable(torch.Tensor(np.array(x[0:train_size]))).cuda()
trainY = Variable(torch.Tensor(np.array(y[0:train_size]))).cuda()
testX = Variable(torch.Tensor(np.array(x[train_size:len(x)]))).cuda()
testY = Variable(torch.Tensor(np.array(y[train_size:len(y)]))).cuda()
and
def forward(self, x):
    h_0 = Variable(torch.zeros(
        self.num_layers, x.size(0), self.hidden_size)).cuda()
    c_0 = Variable(torch.zeros(
        self.num_layers, x.size(0), self.hidden_size)).cuda()
but I still get the same error.
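Reading the error again, it says the *parameter* tensors are on the CPU, so I wonder if the problem is that I only moved the input tensors and never moved the model itself. This is a minimal sketch of what I am thinking of trying (the nn.LSTM here is a simplified stand-in, not the exact model from the code; the sizes are made up):

```python
import torch
import torch.nn as nn

# Pick the GPU if available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A simplified stand-in for the LSTM model in the original code
model = nn.LSTM(input_size=1, hidden_size=2, num_layers=1, batch_first=True)
model = model.to(device)  # moves the model's *parameters*, not just the inputs

# Inputs and initial states created on (or moved to) the same device
x = torch.randn(8, 4, 1).to(device)
h_0 = torch.zeros(1, x.size(0), 2, device=device)
c_0 = torch.zeros(1, x.size(0), 2, device=device)

out, (h_n, c_n) = model(x, (h_0, c_0))
print(out.shape)
```

Is calling model.to(device) (or model.cuda()) the missing step, or do I also need to change something inside forward?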