Hello everyone,
I did some research but couldn't find a solution so far.
I am trying to do categorical prediction on a time series dataset. My training set has the following shape:
torch.Size([3749, 1, 62]): number of samples, window of 1 day, 62 features
labels:
torch.Size([3749]) with categories 0, 1, 2
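For reproducibility, this is roughly how the tensors are shaped (dummy values only, the real ones come from my preprocessing; I cast the labels to float so MSELoss accepts them):

import torch

# stand-ins with the same shapes as my real data
X_train = torch.randn(3749, 1, 62)              # (samples, window of 1 day, features)
y_train = torch.randint(0, 3, (3749,)).float()  # categories 0, 1, 2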
This is my model (imports included for completeness):

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(LSTM, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # zero initial hidden/cell states: (num_layers, batch, hidden_dim)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim)
        out, (hn, cn) = self.lstm(x, (h0, c0))  # out: (batch, seq_len, hidden_dim)
        out = self.fc(out)                      # (batch, seq_len, output_dim)
        out = F.softmax(out, dim=1)
        return out
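For reference, a quick shape check of the forward pass (using the hyper-parameters from the training section below):

model = LSTM(input_dim=62, hidden_dim=80, num_layers=1, output_dim=1)
y_train_pred = model(X_train)
print(y_train_pred.shape)  # torch.Size([3749, 1, 1])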
This is the train loop:

input_dim = 62
hidden_dim = 80
num_layers = 1
output_dim = 1
num_epochs = 10

model = LSTM(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers)
criterion = torch.nn.MSELoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

hist = np.zeros(10)
lstm = []

for t in range(num_epochs):
    optimiser.zero_grad()
    y_train_pred = model(X_train)
    # reshape labels to (batch, 1, 1) to match the prediction shape
    loss = criterion(y_train_pred, y_train.reshape(y_train.shape[0], 1, 1))
    print("Epoch ", t, "MSE: ", loss.item())
    hist[t] = loss.item()
    loss.backward()
    optimiser.step()
This is the output:
Epoch 0 MSE: 0.23072819411754608
Epoch 1 MSE: 0.23072819411754608
Epoch 2 MSE: 0.23072819411754608
Epoch 3 MSE: 0.23072819411754608
Epoch 4 MSE: 0.23072819411754608
Epoch 5 MSE: 0.23072819411754608
Epoch 6 MSE: 0.23072819411754608
Epoch 7 MSE: 0.23072819411754608
Epoch 8 MSE: 0.23072819411754608
Epoch 9 MSE: 0.23072819411754608
As you can see, the model does not learn, and the same thing happens even with more epochs.
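In case it is useful, this is the small check I run after loss.backward() to see whether any gradient reaches the parameters (same model name as in the loop above):

for name, p in model.named_parameters():
    if p.grad is not None:
        print(name, p.grad.abs().max().item())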
Do you have any ideas?
Thanks