I want to build an LSTM model that takes a state S0 as input and outputs a sequence S1, S2, …, Sn. The length of the output sequence is variable. Three more points:
1- Each state in the sequence depends on the previous one: S1 leads to S2, S2 leads to S3, and at some point there should be a way to decide to stop, for example at state Sn.
Alternatively, the length of the output sequence could be fixed, with the state no longer changing after some point (Sn leading to Sn again). Which one is easier to implement, and how? I have sketched both options after this list.
2- Ideally, we should be able to start from S2 and get S3 and so on. I guess this behavior is similar to what the return_sequences=True flag in Keras provides. Should I train the network on all possible subsequences, or is there a way to learn this from the main sequence only? See the roll-forward sketch below for how I imagine seeding from an arbitrary state.
3- Each state is a vector of dimension 100. The first 20 dimensions (let's call them the ID) are fixed throughout a sequence (IDs differ from each other, but each one should stay unchanged during its sequence). How is it possible to keep this part fixed within the LSTM? The last sketch below shows the one idea I have.
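
For reference, here is a minimal sketch of what I have in mind for the stop-decision variant of point 1. This is my own guess at an architecture, not code from any tutorial: an LSTM trained with teacher forcing (the input at step t is S_{t-1}, the target is S_t), plus a second sigmoid head that predicts, at every step, whether to stop there. STATE_DIM and the layer sizes are placeholders I made up.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

STATE_DIM = 100  # each state is a 100-dim vector

# Teacher-forced training model: feed S0..S_{n-1}, predict S1..S_n,
# plus a per-step probability of stopping at that state.
inputs = keras.Input(shape=(None, STATE_DIM))        # variable-length sequences
h = layers.LSTM(256, return_sequences=True)(inputs)  # one hidden vector per step
next_state = layers.Dense(STATE_DIM, name="next_state")(h)
stop = layers.Dense(1, activation="sigmoid", name="stop")(h)

model = keras.Model(inputs, [next_state, stop])
model.compile(optimizer="adam",
              loss={"next_state": "mse", "stop": "binary_crossentropy"})
```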
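And a sketch of the fixed-length alternative, continuing from the snippet above: instead of a stop head, every training target is padded by repeating its final state, so the network would hopefully learn Sn -> Sn as an absorbing transition (pad_with_last is a hypothetical helper of mine):

```python
def pad_with_last(seq, max_len):
    """Pad a (t, STATE_DIM) array to max_len steps by repeating its
    final state, teaching the model that Sn leads to Sn again."""
    reps = np.repeat(seq[-1:], max_len - len(seq), axis=0)
    return np.concatenate([seq, reps], axis=0)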
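For point 2, my current understanding is that teacher forcing with return_sequences=True already trains every prefix, since each step's prediction only sees earlier steps, so explicit subsequence training may be unnecessary; I am not sure about this. At inference time I imagine a roll-forward loop like the one below, which uses the two-head model from the first sketch and can be seeded with S0, S2, or any other state (generate and its default thresholds are my own invention):

```python
def generate(model, seed_state, max_steps=50, stop_threshold=0.5):
    """Roll the model forward from an arbitrary seed state until the
    stop head fires or max_steps is reached."""
    states = [seed_state]
    for _ in range(max_steps):
        window = np.asarray(states, dtype="float32")[None, ...]  # (1, t, STATE_DIM)
        next_s, p_stop = model.predict(window, verbose=0)
        states.append(next_s[0, -1])              # last step's prediction is the next state
        if p_stop[0, -1, 0] > stop_threshold:     # model decided to stop here
            break
    return states
```

(Re-feeding the whole history at every step is quadratic in sequence length, but it keeps the sketch simple.)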
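For point 3, the only idea I have so far is to make the network predict just the 80 changing dimensions and copy the 20-dim ID through from the input, so it cannot drift by construction. A sketch, again continuing from the imports above and again my own guess:

```python
ID_DIM = 20
inputs = keras.Input(shape=(None, STATE_DIM))
h = layers.LSTM(256, return_sequences=True)(inputs)
dynamic = layers.Dense(STATE_DIM - ID_DIM)(h)   # predict only the 80 free dims
fixed_id = inputs[:, :, :ID_DIM]                # copy the ID straight from the input
full_state = layers.Concatenate(axis=-1)([fixed_id, dynamic])
model_fixed_id = keras.Model(inputs, full_state)
```

Is this a reasonable way to do it, or is there a more standard approach?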