Hi there, I am using a bidirectional vanilla RNN for sentiment analysis. I send data into the RNN with shape [batch_size, 1, seq_len]. For example, one batch contains 3 tensors of dimension [1, m], where m is the length of the longest sentence (i.e. the one with the most words) in that batch. Each tensor encodes a single sentence. For example, let's say I have the sentence "this is so cool"; it becomes something like [2, 5, 11, 21, 1, 1, 1, 1, 1]. The indices in the tensor correspond to the indices of the words in my vocabulary dictionary: the word "this" has an index of 2, "is" has an index of 5, and so on. The 1s are the index of the 'pad' token, since every sentence shorter than the longest sentence in the batch is padded. Every sentence also has a label (either 0 for positive or 1 for negative), so the label tensor for a particular batch might look like [0, 1, 1].
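To make the preprocessing concrete, here is roughly how I build these index tensors (the vocabulary below is made up just for this example; my real one is built from the whole dataset):

```python
import torch

# toy vocabulary, made up for illustration; 1 is the 'pad' token
vocab = {'pad': 1, 'this': 2, 'is': 5, 'so': 11, 'cool': 21}

def encode(sentence, max_len, vocab):
    # map each word to its vocabulary index, then pad up to max_len
    idxs = [vocab[w] for w in sentence.split()]
    idxs += [vocab['pad']] * (max_len - len(idxs))
    return torch.tensor([idxs], dtype=torch.long)  # shape [1, max_len]

print(encode('this is so cool', 9, vocab))  # -> [[2, 5, 11, 21, 1, 1, 1, 1, 1]]
```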
I am having trouble getting my desired output. Correct me if I am wrong, but shouldn't I be getting an output with 6 values: 3 for the forward pass and 3 for the backward pass, since this is a bidirectional RNN, my batch size is 3, and there are 3 labels in each batch? However, the final prediction I get is a tensor of shape [1, 9, 1].
Here is my code:
import torch
import torch.nn as nn

data = torch.tensor([[[2, 5, 11, 21, 1, 1, 1, 1, 1]],
                     [[6, 0, 10, 6, 0, 1, 1, 1, 1]],
                     [[9, 15, 16, 4, 0, 17, 0, 10, 18]]], dtype=torch.long)
labels = torch.tensor([0.,0.,1.])
input_seq = data
print(input_seq, input_seq.shape)
batch_size = input_seq.shape[0]
seq_len = input_seq.shape[-1]
print(batch_size)
print(seq_len)
INPUT_DIM = 22
EMBEDDING_DIM = 5
HIDDEN_DIM = 20
OUTPUT_DIM = 1
embeds = nn.Embedding(INPUT_DIM, EMBEDDING_DIM)
rnn = nn.RNN(EMBEDDING_DIM, HIDDEN_DIM, batch_first=True, bidirectional=True)
inputs = torch.zeros((batch_size, seq_len, EMBEDDING_DIM))
for i in range(input_seq.shape[0]):
    inputs[i] = embeds(input_seq[i])
output, hx = rnn(inputs)
print(hx.shape)
print(output.shape)
fc = nn.Linear(HIDDEN_DIM * 2, OUTPUT_DIM)
forward_output = output[:-2, :, :HIDDEN_DIM]
reverse_output = output[2:,:, HIDDEN_DIM:]
staggered_output = torch.cat((forward_output, reverse_output), dim=-1)
predictions = fc(staggered_output)
print(predictions)
print(predictions.shape)
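For comparison, here is a sketch of what I *think* getting one prediction per sentence should look like, using the final hidden states in hx instead of slicing output (I am not sure this is correct, so corrections are welcome; I use random inputs here in place of the embeddings just to check shapes):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
EMBEDDING_DIM, HIDDEN_DIM = 5, 20
rnn = nn.RNN(EMBEDDING_DIM, HIDDEN_DIM, batch_first=True, bidirectional=True)
fc = nn.Linear(HIDDEN_DIM * 2, 1)

inputs = torch.randn(3, 9, EMBEDDING_DIM)  # [batch, seq_len, emb], stand-in for embeddings
output, hx = rnn(inputs)                   # output: [3, 9, 40], hx: [2, 3, 20]

# hx stacks the final state of each direction: hx[0] is the forward state
# after the last time step, hx[1] is the backward state after step 0
final_states = torch.cat((hx[0], hx[1]), dim=-1)  # [3, 40]
predictions = fc(final_states)                    # [3, 1] -> one score per sentence
print(predictions.shape)
```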
I know that this probably isn’t the best way to do sentiment analysis. But I am new to all of this and am just experimenting to learn more. Any help is appreciated. Thanks