I am making a simple variational autoencoder with LSTMs, where I want to take a time series as input and reconstruct the same time series as output.
I am confused about the decoder part: I feed it the sampled latent vectors, and as the LSTM output I get hidden_size features at each time point. My question is how to connect this to a Linear layer whose output is of size 1 at each time point, with the weights shared across all time steps.
Something like TimeDistributed(Dense(1)) in Keras.
Here is example code:
```python
class Decoder(nn.Module):
    """Converts latent vectors into the output time series."""

    def __init__(self, ...):
        super(Decoder, self).__init__()
        ...
        self.lstm = nn.LSTM(latent_length, self.hidden_size)

    def forward(self, latent):
        ...
        decoder_output, _ = self.lstm(decoder_inputs)
```
decoder_output is of shape (sequence_length, n_batches, hidden_size), and I would like to connect it to a linear layer so that the output is (sequence_length, n_batches, n_input_features=1), i.e. one feature per time step. Does someone know how to do this correctly?
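To make the shapes concrete, here is a minimal standalone sketch of what I think might work (the sizes are just placeholders): applying nn.Linear directly to the 3D LSTM output, since as far as I understand it acts on the last dimension. Would this actually share the same weights across all time steps and batch elements?

```python
import torch
import torch.nn as nn

sequence_length, n_batches, hidden_size = 20, 16, 64

# Stand-in for the LSTM output: (sequence_length, n_batches, hidden_size)
decoder_output = torch.randn(sequence_length, n_batches, hidden_size)

# nn.Linear is applied to the last dimension, so the same weight
# matrix is used at every time step and for every batch element.
output_layer = nn.Linear(hidden_size, 1)
out = output_layer(decoder_output)

print(out.shape)  # torch.Size([20, 16, 1])
```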