Timeseries Model Structure

Hello everyone,

The question is more about deep learning than about PyTorch specifically.
What is the best way to build a many-to-many timeseries model for numerical sequences of constant length, e.g. vehicle trajectories?

If I take the last timestep of the encoder, like this…
[image: encoder taking the last timestep's hidden state]
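
For concreteness, a minimal PyTorch sketch of that encoder part; the GRU, the layer sizes, and the 2-D input (x/y positions) are my assumptions, not taken from the diagram:

```python
import torch
import torch.nn as nn

# Minimal encoder sketch: the GRU's final hidden state h_n summarizes
# the observed trajectory and can seed the decoder.
class Encoder(nn.Module):
    def __init__(self, input_size=2, hidden_size=64):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)

    def forward(self, x):           # x: (batch, seq_len, input_size)
        outputs, h_n = self.rnn(x)  # h_n: (1, batch, hidden_size)
        return outputs, h_n
```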

…what would be the best way to generate a sequence of multiple timesteps, using the last hidden state as the new input? I’ve seen different versions, like this one…
[image: decoder variant 1]
and this one …
[image: decoder variant 2]
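
In code, one of these variants (feeding each prediction back in as the next decoder input) would look roughly like this; the GRUCell and the sizes are placeholders, not taken from the diagrams:

```python
import torch
import torch.nn as nn

# Sketch of an autoregressive decoder: each prediction is fed back in
# as the next input, starting from the last observed timestep.
class Decoder(nn.Module):
    def __init__(self, input_size=2, hidden_size=64):
        super().__init__()
        self.rnn = nn.GRUCell(input_size, hidden_size)
        self.out = nn.Linear(hidden_size, input_size)

    def forward(self, last_obs, h, steps):
        # last_obs: (batch, input_size), the final observed timestep
        # h:        (batch, hidden_size), e.g. the encoder's h_n.squeeze(0)
        preds, x = [], last_obs
        for _ in range(steps):
            h = self.rnn(x, h)
            x = self.out(h)               # prediction becomes the next input
            preds.append(x)
        return torch.stack(preds, dim=1)  # (batch, steps, input_size)
```
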
Or a combination of the two, where you concatenate the last timestep of the encoder output with every timestep of the decoder output…
[image: encoder output concatenated with every decoder timestep]
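
Roughly, that concat variant could look like this (again, layer choices and sizes are just placeholders):

```python
import torch
import torch.nn as nn

# Sketch of the concat variant: the encoder's last timestep is repeated
# over the horizon, run through the decoder RNN, and then concatenated
# onto every decoder output before the final projection.
class ConcatDecoder(nn.Module):
    def __init__(self, hidden_size=64, output_size=2):
        super().__init__()
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(2 * hidden_size, output_size)

    def forward(self, context, steps):
        # context: (batch, hidden_size), the encoder's last timestep
        dec_in = context.unsqueeze(1).repeat(1, steps, 1)
        dec_out, _ = self.rnn(dec_in)                # (batch, steps, hidden)
        combined = torch.cat([dec_out, dec_in], -1)  # re-attach the context
        return self.out(combined)                    # (batch, steps, output)
```
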
Feel free to suggest a different / better way.

Also, which of these is the better choice for the labels?

  1. The absolute future values
  2. The change per timestep of the future values → integrate (cumulatively sum) the model output to get the prediction (see the sketch after this list)
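
As a sketch of option 2, with purely illustrative tensors standing in for the model output:

```python
import torch
import torch.nn.functional as F

# Sketch of option 2 (all tensors are illustrative stand-ins): the model
# predicts per-step changes, and a cumulative sum integrates them back
# into absolute positions before the loss.
batch, steps = 8, 12
history = torch.randn(batch, 20, 2)      # observed trajectory (x, y)
future = torch.randn(batch, steps, 2)    # ground-truth future positions
deltas = torch.randn(batch, steps, 2, requires_grad=True)  # "model output"

abs_pred = history[:, -1:, :] + torch.cumsum(deltas, dim=1)
loss = F.mse_loss(abs_pred, future)
loss.backward()                          # backprop through the cumulative sum
```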

I’m working on all of these variations right now. I was just curious whether there is already a “go-to” solution for my problem.

Thanks in advance,
Arthur

The first decoder has a weaker (Bayesian) prior, i.e. the starting context is potentially discardable as the sequence length grows. For the models in the later pictures, that’s achievable too (much more easily in gated RNNs), but it has to be learned. From another perspective, they have a shortcut connection to time zero, which may be beneficial if the initial context is strongly informative.

Re: integral. I think you may face some issues with gradient flow if you go that route and do things like cumsum(). Look into neural ODEs if you feel that learning changes is more suitable for your task, but they’re more complex and slower.
