Understanding LSTM input

Hey @Chris_Oosthuizen,

thanks for your feedback! I’m glad my answer helped you!

> I note that both x and y are returning the same item.

You are right! To be honest, when I wrote this code my goal was just to give an example of how to prepare the input for an LSTM network properly. Moreover, at the time I was working with recurrent autoencoders, so my goal was not to predict the “future”, but to better represent the “present” using information from the “past”. For this reason x and y return the same item!

> Shouldn’t y be the next item in the series to train LSTM.

> I’m struggling to get guidance on what the shape of target should be.

If I have understood what you are trying to implement, I suggest you look at this repository, and at this post (LSTM time sequence generation).
I hope it is close to what you are looking for!
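In the meantime, here is a minimal numpy sketch of one common way to build x/y pairs for next-step prediction, where y is the item that follows each window rather than the window itself. The function name `make_windows` and the toy series are my own for illustration, not something from the repository linked above:

```python
import numpy as np

def make_windows(series, seq_len):
    """Split a 1-D series into (x, y) pairs for next-step prediction:
    each x is a window of seq_len items, each y is the item right after it."""
    xs, ys = [], []
    for i in range(len(series) - seq_len):
        xs.append(series[i:i + seq_len])
        ys.append(series[i + seq_len])  # the *next* item, not the window itself
    # x has shape (num_windows, seq_len, 1), i.e. (batch, seq, features),
    # which is what an LSTM with batch_first=True expects
    x = np.asarray(xs, dtype=np.float32)[..., None]
    y = np.asarray(ys, dtype=np.float32)[..., None]
    return x, y

x, y = make_windows(np.arange(10.0), seq_len=3)
print(x.shape, y.shape)    # (7, 3, 1) (7, 1)
print(x[0].ravel(), y[0])  # [0. 1. 2.] [3.]
```

So for a one-step-ahead target, y ends up with one value per window; if you instead want to predict a whole shifted sequence, y would have the same shape as x but offset by one step.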

> I’ve trying to amend this code to return x and y but getting errors everywhere.

This is not really my area of expertise. However, if you still have trouble after reading the discussion here, try posting your code and the errors on this thread or in a separate topic on the forum, and we can try to fix them!

Cheers :slight_smile: