Assuming we have a Sequence-to-Sequence LSTM model for time-series prediction:
Input time-series: X shaped as (batch_size, seq_length = N, input_dim = 1)
Output time-series: y shaped as (batch_size, seq_length = N, output_dim = 1)
I want to predict the time series y using N-lagged X data. What is the correct temporal order of the input data (during preprocessing) for the LSTM model? In other words, in which direction are the data fed into the LSTM?
Should the input tensor/array, X, be ordered as:
Option 1:
input slice (X[0,:]): [x(t), x(t-1),…, x(t-N)]
output: [y(t), y(t-1),…, y(t-N)]
e.g., model([x(t), x(t-1),…, x(t-N)])
or, Option 2:
input slice (X[0,:]): [x(t-N),…, x(t-1), x(t)]
output: [y(t-N),…, y(t-1), y(t)]
e.g., model([x(t-N),…, x(t-1), x(t)])
Basically, for my input data, does the sequence time-step go from (t) to (t-N) left to right, or vice versa?
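To make the two options concrete, here is a minimal sketch of how I would build one input slice each way (numpy assumed; the variable names, the toy series, and the concrete values of N and t are hypothetical, just for illustration):

```python
import numpy as np

N = 4    # lag window, hypothetical value
t = 10   # current time index, hypothetical value
x = np.arange(20, dtype=float)  # toy 1-D time series: x(i) = i

window = x[t - N : t + 1]       # the N+1 values x(t-N) .. x(t)

# Option 1: newest-to-oldest, left to right: [x(t), x(t-1), ..., x(t-N)]
slice_opt1 = window[::-1]

# Option 2: oldest-to-newest, left to right: [x(t-N), ..., x(t-1), x(t)]
slice_opt2 = window

# Either slice would then be reshaped to the (batch, seq_length, input_dim)
# layout the LSTM expects for a single sample:
X_opt2 = slice_opt2.reshape(1, N + 1, 1)
```

So the question is whether the model should see `slice_opt1` or `slice_opt2` along the time axis.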