Apply Linear layers after LSTM

I’m trying to implement a model which I would describe as an LSTM autoencoder, although I’m not sure it strictly meets the definition of one. The goal is a model that can learn to represent a set of variable-length sequences as fixed-length vectors.

The forward function should take in a list (batch) of lists (sequences) of lists of floats.
The torch.nn.utils.rnn.pack_sequence function is then applied to this input before feeding it to an LSTM, which encodes it to a fixed-length representation. Unlike a seq2seq model, I actually want a bottleneck, since the whole point of my model is obtaining a fixed-length representation. Therefore, I take only the hidden state as the input to the decoder LSTM (and since the encoder LSTM can have multiple layers, I take only the last layer’s hidden state). I then use repeat to repeat each example’s hidden state to that example’s sequence length, and use pack_sequence again before feeding the result into the decoder LSTM. LSTMs in PyTorch return three components: output, hidden state, and cell state. For the output of the decoder, I take the “output” component.
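
To make that concrete, here is a rough sketch of what my forward pass looks like (the class name, sizes, and `enforce_sorted=False` are just placeholders for illustration):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_sequence


class SequenceAutoencoder(nn.Module):
    """Rough sketch of the architecture described above; all sizes are placeholders."""

    def __init__(self, input_size=4, hidden_size=16, num_layers=2):
        super().__init__()
        self.encoder = nn.LSTM(input_size, hidden_size, num_layers=num_layers)
        self.decoder = nn.LSTM(hidden_size, hidden_size, num_layers=1)

    def forward(self, batch):
        # `batch` is a list (batch) of lists (sequences) of lists of floats.
        sequences = [torch.tensor(seq, dtype=torch.float) for seq in batch]
        lengths = [len(seq) for seq in sequences]

        # Pack the variable-length sequences and encode them.
        packed = pack_sequence(sequences, enforce_sorted=False)
        _, (h_n, _) = self.encoder(packed)

        # Bottleneck: only the last layer's hidden state, shape (batch, hidden_size).
        bottleneck = h_n[-1]

        # Repeat each example's fixed-length vector once per timestep,
        # pack again, and feed the result to the decoder.
        repeated = [bottleneck[i].repeat(length, 1) for i, length in enumerate(lengths)]
        decoded, _ = self.decoder(pack_sequence(repeated, enforce_sorted=False))

        # `decoded` is a PackedSequence -- this is where I want the Linear layer.
        return decoded, bottleneck
```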

I would like to apply an nn.Linear layer to each timestep of the decoder output. The problem is that the output component of the decoder LSTM is a PackedSequence object.
Is there a good way to do this? I wish the Linear layer could just be applied to PackedSequence objects directly. What is the alternative? I can post my code below if it’s helpful; it’s fairly minimal.
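
For reference, these are the two workarounds I can think of (again with placeholder sizes), though I’m not sure either is the intended approach: rebuilding the PackedSequence around a projected `.data` tensor, or unpacking to a padded tensor with `pad_packed_sequence` first:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_sequence, pad_packed_sequence

hidden_size, out_size = 16, 8          # placeholder dimensions
linear = nn.Linear(hidden_size, out_size)

# Stand-in for the decoder output: a PackedSequence of per-timestep hidden states.
decoded = pack_sequence(
    [torch.randn(5, hidden_size), torch.randn(3, hidden_size)],
    enforce_sorted=False,
)

# Option 1: apply the Linear to the flat .data tensor and keep the packing
# metadata (batch_sizes, indices) unchanged, since the Linear acts per timestep.
projected = decoded._replace(data=linear(decoded.data))

# Option 2: unpack to a padded tensor of shape (batch, max_len, hidden_size),
# apply the Linear there, and mask the padded positions later using `lengths`.
padded, lengths = pad_packed_sequence(decoded, batch_first=True)
projected_padded = linear(padded)
```

The `._replace` trick seems to work because the Linear acts independently on each timestep, so the packing metadata shouldn’t need to change, but I don’t know whether modifying a PackedSequence like that is considered safe.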