LSTM Autoencoders for variable-length input in PyTorch

Hi everyone, I’m trying to implement an LSTM autoencoder in PyTorch for variable-length input. I have noticed that there are several implementations of LSTM autoencoders, e.g. Implementing an Autoencoder in PyTorch | by Abien Fred Agarap | PyTorch | Medium and LSTM Autoencoders in pytorch, but I have tried them and they don’t work when the input is of variable length (by variable length I mean that the sequence length may differ between samples). I am not sure how to handle this. Does anyone know how to solve it? Thanks a lot.

Hello,
Having variable-length input should not be an issue for the encoder, as long as all dimensions of the input except the temporal one are fixed. However, the question is: how do we implement a decoder with variable-length output?
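
To illustrate the encoder side, here is a minimal sketch (the names `Encoder`, `hidden_size`, and `lengths` are just placeholders of mine, not taken from the linked code) of an LSTM encoder that consumes a padded batch of variable-length sequences via `pack_padded_sequence` and returns one fixed-size code per sequence:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class Encoder(nn.Module):
    """Maps a padded batch of variable-length sequences to fixed-size codes."""
    def __init__(self, n_features, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)

    def forward(self, x, lengths):
        # x: (batch, max_len, n_features); lengths: true length of each sequence
        packed = pack_padded_sequence(x, lengths, batch_first=True,
                                      enforce_sorted=False)
        _, (h_n, _) = self.lstm(packed)  # h_n: (1, batch, hidden_size)
        return h_n.squeeze(0)            # one fixed-size code per sequence

# dummy batch: two sequences of lengths 5 and 3, padded to length 5
x = torch.randn(2, 5, 4)
codes = Encoder(n_features=4, hidden_size=16)(x, lengths=torch.tensor([5, 3]))
print(codes.shape)  # torch.Size([2, 16])
```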

The code you are referring to uses a linear layer for the output of the decoder, so the output size is fixed. Since you want the output of the autoencoder to match the input, this is most likely the issue you will run into.
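
As a rough sketch of what I mean (this is not the exact code from the links, just the general pattern), a decoder whose last layer is a linear layer always emits the same number of timesteps, no matter how long the original input was:

```python
import torch
import torch.nn as nn

SEQ_LEN, N_FEATURES, LATENT = 10, 4, 16   # sequence length baked in at build time

# the output dimension of the linear layer hard-codes the sequence length
decoder = nn.Linear(LATENT, SEQ_LEN * N_FEATURES)

code = torch.randn(2, LATENT)
recon = decoder(code).view(2, SEQ_LEN, N_FEATURES)
print(recon.shape)  # torch.Size([2, 10, 4]) -- always 10 timesteps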

How should we implement variable-sized output?

You could provide “omniscient” information to the model, telling it how many timesteps the output should have for each sample. Alternatively, you could define your output shape as the maximum sequence length in your dataset and designate some output value as an “ignore token”.

I am not aware of a standard way to achieve this. Multiple methods come to mind, but none of them sit well with me.
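
Just to make the first option a bit more concrete, here is a rough sketch (all names are mine; nothing here beyond `nn.LSTMCell` and `nn.Linear` is standard API) of a decoder that is simply unrolled for however many steps you ask it to produce:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Unrolls an LSTM for however many steps the caller requests."""
    def __init__(self, hidden_size, n_features):
        super().__init__()
        self.cell = nn.LSTMCell(n_features, hidden_size)
        self.out = nn.Linear(hidden_size, n_features)

    def forward(self, code, n_steps):
        # code: (batch, hidden_size) from the encoder; n_steps: timesteps to emit
        h, c = code, torch.zeros_like(code)
        y = torch.zeros(code.size(0), self.out.out_features, device=code.device)
        outputs = []
        for _ in range(n_steps):
            h, c = self.cell(y, (h, c))   # feed back the previous prediction
            y = self.out(h)
            outputs.append(y)
        return torch.stack(outputs, dim=1)  # (batch, n_steps, n_features)

decoder = Decoder(hidden_size=16, n_features=4)
recon = decoder(torch.randn(1, 16), n_steps=7)  # "omniscient" target length
print(recon.shape)  # torch.Size([1, 7, 4])
```

For the second option, you would call such a decoder with `n_steps` set to the maximum length in the dataset and multiply the elementwise reconstruction loss by a mask built from the true lengths before averaging, so the padded positions do not contribute.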

Could you provide a short code snippet, running your model on dummy data, that reproduces your issue? What errors are you getting when you try to implement the method? Perhaps also try searching for variable-output-size models.

Hi, I don’t have any code right now except for the links in my original post. I tried changing the sequence length, but it did not work. I’m not sure how to build a variable-length decoder; maybe I can use shared weights between the encoder and decoder LSTMs?

Not 100% sure what you mean here. The encoder and decoder, in principle, do not care about the length of the sequences. What exactly is the problem?

  • Do sequences in the same batch have different lengths, and is that what causes the issues?
  • Do an input sequence and its corresponding output sequence have different lengths? (although that wouldn’t be an autoencoder)