I’m working on an autoencoder that takes a variable-length set of vectors as input and reproduces it. I wanted to use an LSTM or GRU for this, but I ran into a problem while building the decoder model.
The decoder is supposed to be a one-to-many RNN, since it takes a single code as input and outputs a whole sequence. But `nn.LSTM`, as described in the documentation, only takes sequences as input, so I don’t see how to advance the decoding one step at a time, feeding each output back in as the next input.
So is there a way to build a one-to-many model with the PyTorch LSTM/GRU, or do I have to go back to the old-fashioned way?
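For reference, the closest I’ve gotten is to call the LSTM on length-1 sequences in a loop, feeding each output back as the next input (all the sizes and the linear projections here are made up for illustration):

```python
import torch
import torch.nn as nn

# Made-up sizes for illustration
code_dim, hidden_dim, out_dim, seq_len, batch = 16, 32, 8, 5, 4

lstm = nn.LSTM(input_size=out_dim, hidden_size=hidden_dim)
to_hidden = nn.Linear(code_dim, hidden_dim)   # map code -> initial hidden state
to_out = nn.Linear(hidden_dim, out_dim)       # map hidden state -> output vector

code = torch.randn(batch, code_dim)

# Initialise the LSTM state from the code
h = torch.tanh(to_hidden(code)).unsqueeze(0)  # (1, batch, hidden_dim)
c = torch.zeros_like(h)

# Decode one step at a time: each call sees a sequence of length 1,
# and the output is fed back as the next input
x = torch.zeros(1, batch, out_dim)            # "start" input of zeros
outputs = []
for _ in range(seq_len):
    y, (h, c) = lstm(x, (h, c))               # y: (1, batch, hidden_dim)
    x = to_out(y)                             # (1, batch, out_dim)
    outputs.append(x)

decoded = torch.cat(outputs, dim=0)           # (seq_len, batch, out_dim)
```

This runs, but it feels clunky compared to the single-call encoder side, which is why I’m asking whether there is a more idiomatic way.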
Thanks a lot