Architecture for LSTM autoencoder

Hi everybody! I'm currently working on anomaly detection in time series (1D signal anomalies).

To deal with it, I want to build an LSTM autoencoder.

I have no trouble with the code, but I can't find a reference architecture (I found no paper).

Can someone advise me?

Thank you

There are multiple ways to build an autoencoder with LSTMs. At a high level:

  1. Encoder: outputs a state.
  2. Decoder: uses this state as its initial state and tries to reconstruct the input to the encoder.
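A minimal sketch of this encoder/decoder setup in PyTorch (the class and layer sizes are illustrative, not from a specific paper): the encoder's final state seeds the decoder, which tries to reconstruct the input sequence, and the reconstruction error serves as the anomaly score.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Minimal LSTM autoencoder: the encoder's final (h, c) state seeds
    the decoder, which reconstructs the input sequence."""

    def __init__(self, n_features=1, hidden_size=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.output_layer = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        _, (h, c) = self.encoder(x)        # final state summarizes the sequence
        # Simple choice: decode from a zero sequence of the same length,
        # seeded with the encoder's final state.
        zeros = torch.zeros_like(x)
        dec_out, _ = self.decoder(zeros, (h, c))
        return self.output_layer(dec_out)  # reconstruction of x

model = LSTMAutoencoder()
x = torch.randn(8, 50, 1)                  # batch of 8 signals, 50 timesteps each
recon = model(x)
loss = nn.functional.mse_loss(recon, x)    # anomaly score = reconstruction error
```

At inference time you would flag windows whose reconstruction error is far above what the model achieves on normal data.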

The state passed from the encoder can be:

  1. Final state.
  2. Average of the states over the last K timesteps.
  3. Weighted average (soft attention).
  4. The output of a NN that takes all the encoder states and learns the best combination (a more advanced version of soft attention).
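Option 3 (soft attention) can be sketched in a few lines; here a learned score per timestep is softmax-normalized and used to average the encoder states into a single summary vector (layer names and sizes are illustrative):

```python
import torch
import torch.nn as nn

# Encoder outputs for a batch: (batch, seq_len, hidden_size)
enc_out = torch.randn(8, 50, 64)

# One learned score per timestep, softmax-normalized over time.
score_layer = nn.Linear(64, 1)
scores = score_layer(enc_out)            # (8, 50, 1)
weights = torch.softmax(scores, dim=1)   # weights sum to 1 over the time axis

# Weighted average of the encoder states -> pooled summary state.
state = (weights * enc_out).sum(dim=1)   # (8, 64)
```

The pooled `state` would then initialize the decoder in place of the raw final state.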

Also, because you are dealing with sequences, you can attach multiple decoders to improve the representational power of the encoder: for example, one decoder that reconstructs the current sequence and another that predicts the future, etc.
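The multi-decoder idea above might look like this, assuming one shared encoder, one reconstruction decoder, and one future-prediction decoder (class and head names are hypothetical):

```python
import torch
import torch.nn as nn

class CompositeLSTM(nn.Module):
    """One encoder, two decoders: one reconstructs the input window,
    the other predicts the next `future_len` steps."""

    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.recon_dec = nn.LSTM(n_features, hidden, batch_first=True)
        self.pred_dec = nn.LSTM(n_features, hidden, batch_first=True)
        self.recon_head = nn.Linear(hidden, n_features)
        self.pred_head = nn.Linear(hidden, n_features)

    def forward(self, x, future_len=10):
        # Both decoders start from the same encoder state.
        _, state = self.encoder(x)
        recon, _ = self.recon_dec(torch.zeros_like(x), state)
        future_in = torch.zeros(x.size(0), future_len, x.size(2))
        pred, _ = self.pred_dec(future_in, state)
        return self.recon_head(recon), self.pred_head(pred)

model = CompositeLSTM()
x = torch.randn(4, 30, 1)
recon, pred = model(x, future_len=10)
```

Training would sum a reconstruction loss on `recon` and a prediction loss on `pred`, forcing the encoder state to capture both the past and what comes next.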
