Hi everybody! I'm currently building an LSTM autoencoder and I use a linear layer.
The linear layer needs a flat vector as input, but my data comes in batches of size batch_size, so I don't know how to handle the batch dimension. Here's the code, to be more precise.
In the __init__ method:
INPUT_FC = self.sequence_len * self.hidden_dim
self.fc = nn.Sequential(
    nn.Linear(in_features=INPUT_FC, out_features=self.hidden_dim // 2),
    nn.ReLU(inplace=False),
    nn.Linear(in_features=self.hidden_dim // 2, out_features=self.latent_dim),
)
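To make the shapes concrete, here is a tiny standalone check of the fc block with made-up dimensions (sequence_len = 10, hidden_dim = 64, latent_dim = 16 are just placeholders):

import torch
import torch.nn as nn

# hypothetical dimensions, only for illustration
sequence_len, hidden_dim, latent_dim = 10, 64, 16

fc = nn.Sequential(
    nn.Linear(in_features=sequence_len * hidden_dim, out_features=hidden_dim // 2),
    nn.ReLU(inplace=False),
    nn.Linear(in_features=hidden_dim // 2, out_features=latent_dim),
)

# fc only accepts inputs whose last dimension is sequence_len * hidden_dim
flat = torch.randn(sequence_len * hidden_dim)
print(fc(flat).shape)  # torch.Size([16])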
In the forward method:
# SIZE(output) = (seq_len, batch_size, num_directions * hidden_dim) if batch_first=False
# SIZE(output) = (batch_size, seq_len, num_directions * hidden_dim) if batch_first=True
# here num_directions = 1 because the LSTM is unidirectional (the data is a time series, so time is ordered)
# SIZE(h_embeded) = SIZE(c_embeded) = (batch_size, hidden_dim)
# output = output.view(-1)
# fc needs a flat vector, but this reshape only works when the first dimension
# (batch_size with batch_first=True) is 1:
output = output.view(1, output.size(1) * output.size(2))
lattent_vector = self.fc(output)
# fc expects seq_len * hidden_dim input features
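For context, here is a minimal runnable sketch of the whole encoder with made-up dimensions (n_features = 1, batch_size = 32, etc. are just placeholders); in it I flatten each batch element separately with reshape, but I'm not sure this is the right way to handle the batch dimension:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_features=1, sequence_len=10, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.sequence_len = sequence_len
        self.hidden_dim = hidden_dim
        self.latent_dim = latent_dim
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_dim,
                            batch_first=True)
        INPUT_FC = self.sequence_len * self.hidden_dim
        self.fc = nn.Sequential(
            nn.Linear(in_features=INPUT_FC, out_features=self.hidden_dim // 2),
            nn.ReLU(inplace=False),
            nn.Linear(in_features=self.hidden_dim // 2, out_features=self.latent_dim),
        )

    def forward(self, x):
        # SIZE(x) = (batch_size, seq_len, n_features) with batch_first=True
        output, (h_embeded, c_embeded) = self.lstm(x)
        # SIZE(output) = (batch_size, seq_len, hidden_dim)
        # flatten seq_len and hidden_dim together, keeping the batch dimension
        output = output.reshape(output.size(0), -1)  # (batch_size, seq_len * hidden_dim)
        lattent_vector = self.fc(output)             # (batch_size, latent_dim)
        return lattent_vector

x = torch.randn(32, 10, 1)  # batch of 32 sequences of length 10
print(Encoder()(x).shape)   # torch.Size([32, 16])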
Thank you!