I am trying to implement a sequence-to-sequence prediction model for time series using either an LSTM or a GRU, but could not find a good tutorial for this exact problem.
However, I found this, which is implemented for an NLP problem (machine translation).
After giving it a thorough read, I intuitively figured out the points below (which I am not sure are correct):
- For time series data (1-dimensional data converted into chunks of 12 time steps using a moving window with stride 1), we do not need an embedding layer
- We do not need start-of-sequence and end-of-sequence tokens (`<sos>`, `<eos>`) at the beginning and end of each sequence
- In the NLP data, each word is represented as a 1 x vocab_size vector; for time series, we just need a single dimension (one feature) for each time step
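To make the third point concrete, here is a minimal NumPy sketch of the moving-window preparation I have in mind. The function name `make_windows` and the `horizon=3` target length are my own illustrative choices, not from the tutorial; the key point is that the output shape `(n_windows, 12, 1)` feeds straight into an LSTM/GRU encoder as real-valued features, with no embedding layer and no special tokens:

```python
import numpy as np

def make_windows(series, window=12, horizon=3, stride=1):
    """Slice a 1-D series into (encoder input, decoder target) chunks.

    Returns X of shape (n_windows, window, 1) and y of shape
    (n_windows, horizon, 1). The trailing dimension of size 1 is the
    single feature per time step, replacing the 1 x vocab_size vectors
    used in the NLP setting.
    """
    X, y = [], []
    last = len(series) - window - horizon
    for i in range(0, last + 1, stride):
        X.append(series[i : i + window])            # past 12 steps
        y.append(series[i + window : i + window + horizon])  # next steps
    X = np.asarray(X, dtype=np.float32)[..., np.newaxis]
    y = np.asarray(y, dtype=np.float32)[..., np.newaxis]
    return X, y

series = np.sin(np.linspace(0, 10, 100))  # toy 1-D series
X, y = make_windows(series)
print(X.shape, y.shape)  # (86, 12, 1) (86, 3, 1)
```

If this framing is right, the only seq2seq-specific change from the translation tutorial would be swapping its embedding + softmax layers for this raw-value input and a linear output.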
It would be great if anyone could confirm these points,
or share any notebook I can follow for sequence-to-sequence prediction of time series.