I have a task where I wish to use ML to model an output such that, given a new set of time-independent parameters, I can predict this output over a given period of time.
For example, y = f(t, a, b), where a and b are time-independent variables.
Let’s assume you have the following data:
a = 300, b = 30
y(t) = [1, 2, 3, 4, 5, 6]
a = 330, b = 36
y(t) = [1, 3, 4, 5, 7, 8]
In formulating my solution, I concatenated the y(t) series to form:
data = [1, 2, 3, 4, 5, 6, 1, 3, 4, 5, 7, 8]
a = [300, 330]
b = [30, 36]
Since LSTMs are commonly used for time-series data, I created sliding-window sequences of length 3 as follows:
X_time = [[1, 2, 3], [2, 3, 4], [3, 4, 5], [1, 3, 4], [3, 4, 5], [4, 5, 7]]
X_non_time = [[300, 30], [300, 30], [300, 30], [330, 36], [330, 36], [330, 36]]
y = [[4], [5], [6], [5], [7], [8]]
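As a sanity check on the windowing step, here is a minimal sketch (the helper name make_windows is my own, not from the question) that builds the windows per series and repeats the static features for each window. The key point is to window each y(t) series separately, not the concatenated data list, so no window spans the boundary between the two series:

```python
# Build sliding windows of length seq_len over one series,
# pairing each window with that series' static (time-independent) features.
def make_windows(series, static, seq_len=3):
    X_time, X_non_time, y = [], [], []
    for i in range(len(series) - seq_len):
        X_time.append(series[i:i + seq_len])   # input window
        X_non_time.append(static)              # repeated static features
        y.append([series[i + seq_len]])        # next value as the target
    return X_time, X_non_time, y

Xt1, Xs1, y1 = make_windows([1, 2, 3, 4, 5, 6], [300, 30])
Xt2, Xs2, y2 = make_windows([1, 3, 4, 5, 7, 8], [330, 36])

X_time = Xt1 + Xt2       # [[1, 2, 3], [2, 3, 4], [3, 4, 5], [1, 3, 4], [3, 4, 5], [4, 5, 7]]
X_non_time = Xs1 + Xs2   # [[300, 30], [300, 30], [300, 30], [330, 36], [330, 36], [330, 36]]
y = y1 + y2              # [[4], [5], [6], [5], [7], [8]]
```

This reproduces exactly the X_time, X_non_time, and y arrays listed above.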
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTM(nn.Module):
    def __init__(self, lstm_input_size, fc_input_size, hidden_size, num_layers, output_size):
        super(LSTM, self).__init__()
        self.lstm = nn.LSTM(lstm_input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size + fc_input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, xtime, xnon_time):
        out, _ = self.lstm(xtime)
        # Take the output from the last time step and concatenate the static features
        new_input = torch.cat((out[:, -1, :], xnon_time), dim=1)
        x = F.relu(self.fc(new_input))
        out = self.fc2(x)
        return out
For the actual problem, I set up the data with the following shapes:
Shape of time dependent sequence (xtime): torch.Size([1764, 3, 1])
Shape of time independent sequence (xnon_time): torch.Size([1764, 8])
Shape of labels (output): torch.Size([1764, 1])
lstm_input_size = 1 # Number of time-dependent features
fc_input_size = 8 # Number of time-independent features
output_size = 1 # Number of output
hidden_size = 50
num_layers = 5
lstm_model = LSTM(lstm_input_size, fc_input_size, hidden_size, num_layers, output_size).to(device)
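To verify that the architecture and the stated shapes are mutually consistent, here is a self-contained smoke test with random tensors of the same shapes (I renamed the class LSTMModel here only to avoid shadowing nn.LSTM; the layers and forward pass are otherwise the same as above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMModel(nn.Module):
    def __init__(self, lstm_input_size, fc_input_size, hidden_size, num_layers, output_size):
        super().__init__()
        self.lstm = nn.LSTM(lstm_input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size + fc_input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, xtime, xnon_time):
        out, _ = self.lstm(xtime)                         # (batch, seq_len, hidden_size)
        x = torch.cat((out[:, -1, :], xnon_time), dim=1)  # last step + static features
        return self.fc2(F.relu(self.fc(x)))

model = LSTMModel(lstm_input_size=1, fc_input_size=8,
                  hidden_size=50, num_layers=5, output_size=1)

xtime = torch.randn(1764, 3, 1)   # (batch, seq_len, n_time_features)
xnon_time = torch.randn(1764, 8)  # (batch, n_static_features)
out = model(xtime, xnon_time)
print(out.shape)  # torch.Size([1764, 1]), matching the labels
```

The forward pass runs and the output shape matches the labels, so the wiring is at least shape-correct; whether concatenating static features after the LSTM is the best design is a separate modelling question.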
Please advise on whether this approach is correct.