How to handle variable-length sentences with an LSTM?

Hi,

Is there anything wrong with the code below? The sentences are already padded to a maximum length.

    embedded = self.embedding(sentence)
    # Every batch uses the same maxlen; how do I make a data loader with a per-batch maxlen?
    input_lengths = [sentence.shape[1]] * sentence.shape[0]
    packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths, batch_first=True)
    output, hidden = self.text_LSTM(packed, None)
    output, _ = torch.nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
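
In case it helps, a minimal self-contained version of this forward pass looks roughly like the sketch below. The vocabulary size, dimensions, and the linear classifier head are just placeholders, not my exact setup; only the forward pass mirrors the snippet above.

    import torch
    import torch.nn as nn

    class TextLSTM(nn.Module):
        # Placeholder hyperparameters; only forward() mirrors the snippet above.
        def __init__(self, vocab_size=10_000, embed_dim=100, hidden_dim=128, num_classes=2, pad_idx=0):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=pad_idx)
            self.text_LSTM = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.fc = nn.Linear(hidden_dim, num_classes)

        def forward(self, sentence):
            # sentence: (batch, maxlen) LongTensor, already padded to a fixed maxlen
            embedded = self.embedding(sentence)
            # Every length is set to maxlen because the sentences are padded
            input_lengths = [sentence.shape[1]] * sentence.shape[0]
            packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths, batch_first=True)
            output, hidden = self.text_LSTM(packed, None)
            output, _ = torch.nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
            return self.fc(output[:, -1, :])

    model = TextLSTM()
    batch = torch.randint(1, 10_000, (4, 20))   # 4 padded sentences with maxlen = 20
    logits = model(batch)                        # shape: (4, num_classes)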

However, my model is not learning when I use the following as the final representation:

    output[:, -1, :]

extracted from the LSTM. I have already fixed the input lengths (since the sentences are padded).
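
For reference, here is a quick sanity check of what that slice returns (the dimensions are made up for the check). With `batch_first=True` and every sequence running to the full maxlen, it should match the final hidden state of the last layer:

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=100, hidden_size=128, batch_first=True)
    x = torch.randn(4, 10, 100)                 # (batch=4, maxlen=10, embed_dim=100)
    output, (h_n, c_n) = lstm(x)
    print(output.shape)                         # expected: torch.Size([4, 10, 128])
    print(output[:, -1, :].shape)               # expected: torch.Size([4, 128])
    # Since every sequence runs to maxlen, the last time step equals h_n of the last layer
    print(torch.allclose(output[:, -1, :], h_n[-1]))  # expected: True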