RNN-BiLSTM sentiment analysis low accuracy

I’m using PyTorch with a training set of movie reviews, each labeled positive or negative. Every review is truncated or padded to 60 words, and I have a batch size of 32. This 60x32 tensor is fed to an embedding layer with an embedding dim of 100, resulting in a 60x32x100 tensor. Then I use the unpadded lengths of each review to pack the embedding output and feed that to a BiLSTM layer with hidden dim = 256.

I then pad it back, apply a transformation (to try to get the last hidden state for the forward and backward directions), and feed that transformation to a 512x1 Linear layer. Here is my module; I pass the final output through a sigmoid, which is not shown here.

class RNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, 
                 bidirectional, dropout, pad_idx):

        super().__init__()
        self.el = nn.Embedding(vocab_size, embedding_dim)
        print('vocab size is ', vocab_size)
        print('embedding dim is ', embedding_dim)
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers  # 2; init_hidden() below references self.n_layers
        self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim, num_layers=n_layers, dropout=dropout, bidirectional=bidirectional)
        # Have an output layer for outputting a single output value
        self.linear = nn.Linear(2*hidden_dim, output_dim)

    def init_hidden(self):
        return (torch.zeros(self.n_layers*2, 32, self.hidden_dim).to(device), 
                torch.zeros(self.n_layers*2, 32, self.hidden_dim).to(device))

    def forward(self, text, text_lengths):
        print('input text size ', text.size())
        embedded = self.el(text)
        print('embedded size ', embedded.size())
        packed_seq = torch.nn.utils.rnn.pack_padded_sequence(embedded, lengths=text_lengths, enforce_sorted=False)
        packed_out, (ht, ct) = self.lstm(packed_seq, None)
        out_rnn, out_lengths = torch.nn.utils.rnn.pad_packed_sequence(packed_out)
        print('padded lstm out ', out_rnn.size())        
        #out_rnn = out_rnn[-1] #this works
        #out_rnn = torch.cat((out_rnn[-1, :, :self.hidden_dim], out_rnn[0, :, self.hidden_dim:]), dim=1) # this works
        out_rnn = torch.cat((ht[-1], ht[0]), dim=1) #this works
        #out_rnn = out_rnn[:, -1, :] #doesn't work maybe should
        print('attempt to get last hidden ', out_rnn.size())
        linear_out = self.linear(out_rnn)
        print('after linear ', linear_out.size())
        return linear_out
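
For reference, a rough sketch of how a module like this gets constructed and called with the sizes described above (the vocab size, dropout value, pad index, and random inputs are placeholders):

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = RNN(vocab_size=25000,        # placeholder; the real value comes from the torchtext vocab
            embedding_dim=100,
            hidden_dim=256,
            output_dim=1,
            n_layers=2,
            bidirectional=True,
            dropout=0.5,             # placeholder
            pad_idx=1).to(device)    # placeholder

text = torch.randint(0, 25000, (60, 32)).to(device)   # (seq_len, batch) of token indices
text_lengths = torch.randint(1, 61, (32,))            # unpadded length of each review (kept on CPU)
logits = model(text, text_lengths)                    # -> (32, 1), raw scores before the sigmoid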

I’ve tried 3 different transformations to get the dimensions correct for the linear layer

out_rnn = out_rnn[-1] #this works
out_rnn = torch.cat((out_rnn[-1, :, :self.hidden_dim], out_rnn[0, :, self.hidden_dim:]), dim=1) # this works
out_rnn = torch.cat((ht[-1], ht[0]), dim=1) #this works

These all produce an output like this

input text size torch.Size([60, 32])

embedded size torch.Size([60, 32, 100])

padded lstm out torch.Size([36, 32, 512])

attempt to get last hidden torch.Size([32, 512])

after linear torch.Size([32, 1])

I would expect the padded lstm out to be [60, 32, 512] but it is always less than 60 in the first dimension.
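
For what it’s worth, pad_packed_sequence pads back only to the longest sequence in the current batch (apparently 36 words here), not to the original padded length of 60; a small standalone sketch, using its total_length argument to force the full 60 steps:

import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

x = torch.randn(60, 32, 100)                  # a padded batch shaped like the embedding output
lengths = torch.tensor([36] + [5] * 31)       # longest review in this batch is 36, not 60
packed = pack_padded_sequence(x, lengths, enforce_sorted=False)
out, _ = pad_packed_sequence(packed)                         # (36, 32, 100): padded only to max(lengths)
out_full, _ = pad_packed_sequence(packed, total_length=60)   # (60, 32, 100): forced back to 60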

I’m training for 10 epochs with optim.SGD and nn.BCEWithLogitsLoss(). My training accuracy is always around 52% and test accuracy is always at about 50%, so the model is doing no better than random guessing. I’m sure that my data is being handled correctly in my torchtext.data.Dataset. Am I forwarding my tensors along incorrectly?
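
A rough sketch of that training step; the learning rate, the iterator, and the batch field names here are assumptions, and note that nn.BCEWithLogitsLoss applies the sigmoid internally, so it should receive the raw linear output:

import torch.nn as nn
import torch.optim as optim

criterion = nn.BCEWithLogitsLoss()                     # combines sigmoid + binary cross-entropy
optimizer = optim.SGD(model.parameters(), lr=1e-3)     # lr is a placeholder

for epoch in range(10):
    for batch in train_iterator:                       # assumed torchtext iterator
        text, text_lengths = batch.text                # assumes the TEXT field has include_lengths=True
        optimizer.zero_grad()
        logits = model(text, text_lengths).squeeze(1)  # (32,) raw scores
        # pass raw logits: an extra sigmoid before BCEWithLogitsLoss would squash them twice
        loss = criterion(logits, batch.label.float())  # assumed label field with 0/1 labels
        loss.backward()
        optimizer.step()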

I have tried using batch_first=True in my LSTM and in the pack_padded_sequence and pad_packed_sequence calls, and that breaks my transformations before feeding to the linear layer.
I’ve also tried without the pack/pad functions and get the same results.
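
With batch_first=True the LSTM output becomes (batch, seq, features) while ht keeps its (num_layers * num_directions, batch, hidden) layout, so only the output-based indexing has to change; a small standalone sketch (ht[-2] and ht[-1] are the last layer’s forward and backward states):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=100, hidden_size=256, num_layers=2,
               bidirectional=True, batch_first=True)
x = torch.randn(32, 60, 100)                   # (batch, seq, embedding_dim)
out, (ht, ct) = lstm(x)
print(out.shape)                               # torch.Size([32, 60, 512]); batch is now dim 0
print(ht.shape)                                # torch.Size([4, 32, 256]); unchanged by batch_first
last_step = out[:, -1, :]                      # (32, 512): index time on dim 1, not dim 0
hidden = torch.cat((ht[-2], ht[-1]), dim=1)    # (32, 512): last layer, both directions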

An accuracy of around 50% for two classes just means that your classifier is guessing, i.e., has not learned anything.

I would first simplify the model as much as possible, in particular not using a bidirectional LSTM. out_rnn = out_rnn[-1] is only fully correct when you have only one direction. Just because #this works doesn’t mean it’s correct. I would also use just one layer. You can also drop the packing at first. Once a barebones classifier is learning something, you can add complexity.
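
Something along these lines as a bare-bones starting point (just a sketch, not your exact setup):

import torch
import torch.nn as nn

class SimpleRNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, pad_idx):
        super().__init__()
        self.el = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
        # one layer, one direction, no packing
        self.lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim)
        self.linear = nn.Linear(hidden_dim, output_dim)

    def forward(self, text):
        embedded = self.el(text)             # (seq, batch, embedding_dim)
        out, (ht, ct) = self.lstm(embedded)  # ht: (1, batch, hidden_dim)
        return self.linear(ht[-1])           # final hidden state of the single layer/direction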

For classification I usually use nn.NLLLoss in combination with log_softmax. Maybe you can try it that way.
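
A minimal sketch of that combination; it implies two output units and integer class labels rather than a single sigmoid output:

import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.NLLLoss()

logits = torch.randn(32, 2)               # linear layer output with output_dim = 2
log_probs = F.log_softmax(logits, dim=1)  # NLLLoss expects log-probabilities
labels = torch.randint(0, 2, (32,))       # class indices 0/1, not floats
loss = criterion(log_probs, labels)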

Thank you, I’m using just 1 layer now and getting better results.