GRU Time Series Autoencoder

It’s not quite clear what you’re asking. What is not working? Do you get errors? Does the loss not go down? Are you generally not happy with the accuracy? Is the accuracy worse compared to the Keras model?

When I build my autoencoders, I usually start with the most basic setup, see if it works (no errors, loss goes down, able to overtrain it on a small dataset, etc.), and then step by step add complexity to the model and check again each time if it still works. Getting no errors is usually the easy part, but that doesn’t mean the model is correct.
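If it helps, this is the kind of quick sanity check I mean (just a sketch; the model and batch here are placeholders):

```python
import torch

def overfit_check(model, batch, steps=500, lr=1e-3):
    # batch: (batch_size, seq_len, n_features) -- one small, fixed batch
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for step in range(steps):
        opt.zero_grad()
        recon = model(batch)          # the autoencoder reconstructs its input
        loss = loss_fn(recon, batch)  # target is the input itself
        loss.backward()
        opt.step()
        if step % 100 == 0:
            print(f"step {step}: loss = {loss.item():.6f}")
    # if the loss does not get close to zero, something is off
```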

I’m not saying the model is wrong, but it’s definitely not the classic RNN-based encoder-decoder model. There the encoder, well, encodes your sequence into some latent representation (typically the last hidden state), which is then the “seed” hidden state for the decoder. The decoder then generates the next output item and the next hidden state step by step, using the current hidden state. In code that usually involves some loop.
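Something like this minimal sketch (not your exact setup; layer names and sizes are made up):

```python
import torch
import torch.nn as nn

class GRUAutoencoder(nn.Module):
    def __init__(self, n_features, hidden_size):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.decoder_cell = nn.GRUCell(n_features, hidden_size)
        self.out = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        batch_size, seq_len, n_features = x.shape
        _, h = self.encoder(x)       # last hidden state = latent representation
        h = h.squeeze(0)             # (batch, hidden) -- the "seed" for the decoder
        step_input = x.new_zeros(batch_size, n_features)
        outputs = []
        for _ in range(seq_len):     # the decoder loop: one time step at a time
            h = self.decoder_cell(step_input, h)
            step_input = self.out(h)            # feed the prediction back in
            outputs.append(step_input)
        return torch.stack(outputs, dim=1)      # (batch, seq_len, n_features)
```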

In your code, you copy/repeat the last hidden state (I ignore the linear layer for simplicity) and give that sequence to your decoder GRU. That doesn’t really make sense, since that sequence has the same item at every time step.
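If I understand your code correctly, the decoder input is built roughly like this (my assumption, with made-up names and sizes):

```python
import torch
import torch.nn as nn

n_features, hidden_size, seq_len = 3, 16, 50
encoder = nn.GRU(n_features, hidden_size, batch_first=True)
decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)

x = torch.randn(8, seq_len, n_features)
_, h = encoder(x)                                   # h: (1, batch, hidden)
repeated = h.transpose(0, 1).repeat(1, seq_len, 1)  # same vector copied seq_len times
decoded, _ = decoder(repeated)                      # decoder sees the identical input at every step
```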

You may want to look at my code for an autoencoder and variational autoencoder (VAE). The context is text (NLP), but that doesn’t matter. I essentially started with the basic machine translation / seq2seq model, only that the input sentence and output sentence are the same. And then I just tweaked some stuff. They both train fine, with the VAE being inherently much more difficult to train.
