Alright. So I have a training set currently held in NumPy arrays: the X values have shape [50573, 322] and the Y values have shape [50573, 126]. To preprocess my data I am calling…
import numpy as np
import torch
from sklearn.preprocessing import StandardScaler

y_train = y_train.astype(float)
y_test = y_test.astype(float)

# Standardise the features: fit on the training set only, then apply to test
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Reshape to (samples, timesteps, 1); `end` is defined earlier in my script
X_train = np.reshape(X_train, (len(X_train), end - 1, 1))
X_test = np.reshape(X_test, (len(X_test), end - 1, 1))

# Convert to float tensors (torch.autograd.Variable is deprecated, so the
# extra Variable/Tensor wrapping I had before is unnecessary)
X_train = torch.from_numpy(X_train).float()
y_train = torch.from_numpy(y_train).float()
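For what it's worth, the shapes at this point look right to me (note that end - 1 has to equal 322, the feature count, for the reshape to succeed)…

print(X_train.shape)  # torch.Size([50573, 322, 1])
print(y_train.shape)  # torch.Size([50573, 126])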
My model is defined as…
import torch.nn as nn

class LSTMNET(nn.Module):
    def __init__(self, input_dim, hidden_dim, batch_size, output_dim=1,
                 num_layers=2):
        super(LSTMNET, self).__init__()
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.batch_size = batch_size
        self.num_layers = num_layers

        # Define the LSTM layer (batch_first defaults to False, so it expects
        # input of shape (seq_len, batch_size, input_dim))
        self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, self.num_layers)

        # Define the output layer
        self.linear = nn.Linear(self.hidden_dim, output_dim)

    def init_hidden(self):
        # This is what we'll initialise our hidden state as
        return (torch.zeros(self.num_layers, self.batch_size, self.hidden_dim),
                torch.zeros(self.num_layers, self.batch_size, self.hidden_dim))

    def forward(self, input):
        input = torch.Tensor(input)

        # Forward pass through the LSTM layer
        # shape of lstm_out: (seq_len, batch_size, hidden_dim)
        # shape of self.hidden: (a, b), where a and b both
        # have shape (num_layers, batch_size, hidden_dim)
        print(input.shape)
        lstm_out, self.hidden = self.lstm(input.view(len(input), self.batch_size, -1))

        # Only take the output from the final timestep
        # (the whole of lstm_out could be passed on instead for a seq2seq prediction)
        y_pred = self.linear(lstm_out[-1].view(self.batch_size, -1))
        return y_pred.view(-1)
Then…
lstm = LSTMNET(1, 1, batch_size=batch_size, output_dim=126, num_layers=1)
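(As an aside, I wondered whether setting batch_first=True on the LSTM would let me skip the manual view reshaping entirely. This is only a sketch of what I have in mind, with a made-up class name and an arbitrary hidden size, and I haven't verified it against my data…

class LSTMNetBF(nn.Module):
    # Hypothetical batch_first variant, not my current model
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers=1):
        super().__init__()
        # batch_first=True means inputs are (batch, seq_len, input_dim)
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # lstm_out: (batch, seq_len, hidden_dim); hidden state defaults to zeros
        lstm_out, _ = self.lstm(x)
        # map the final timestep's output to (batch, output_dim)
        return self.linear(lstm_out[:, -1, :])

# e.g. model = LSTMNetBF(1, 64, 126) would take X_train of shape
# (50573, 322, 1) and return predictions of shape (50573, 126)

…but I'd rather understand why my original version breaks.)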
All I want to do is train this model. But even something simple like…
predicted = lstm(X_train)
…fails with errors. I can't figure out how this is supposed to work. Please advise.
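For reference, the sort of training loop I am ultimately aiming for looks like this (MSELoss, Adam, and the epoch count are placeholder choices on my part, since my targets are real-valued)…

criterion = nn.MSELoss()
optimiser = torch.optim.Adam(lstm.parameters(), lr=1e-3)

for epoch in range(100):          # arbitrary number of epochs
    optimiser.zero_grad()
    predicted = lstm(X_train)     # the forward call that currently errors
    loss = criterion(predicted, y_train)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")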