I am trying to classify sentences into multiple classes. After building a word2index dictionary, each sentence is converted into a tensor of word indices. For example, a nine-word sentence becomes: tensor([1407995, 937957, 936279, 904725, 682273, 1291222, 523149, 566120, 913504])
The error I am getting is: ValueError: Expected input batch_size (9) to match target batch_size (1).
I think this is because my output (the target label) is a single label (whose index I obtained when building the word2index dictionary), while my input is a tensor with one value per word in the sentence, and the sentence length is 9.
The error occurs at loss = loss_cf(label_pred, target_label), where loss_cf = nn.NLLLoss().
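The mismatch can be reproduced in isolation (a minimal sketch with made-up tensors, not my actual model): nn.NLLLoss treats the first dimension of its input as the batch dimension, so 9 per-word rows of class scores against 1 target index triggers exactly this error.

```python
import torch
import torch.nn as nn

loss_cf = nn.NLLLoss()
seq_len, num_classes = 9, 100

# One row of log-probabilities per word -> an effective batch of 9
label_pred = torch.log_softmax(torch.randn(seq_len, num_classes), dim=1)
# A single class index for the whole sentence -> a batch of 1
target_label = torch.tensor([5])

message = ""
try:
    loss_cf(label_pred, target_label)
except ValueError as err:
    message = str(err)
print(message)  # the same batch-size mismatch reported above
```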
The output of the fully connected layer after the LSTM is class_val = self.softmax(fc_layer), where (fc_layer): Linear(in_features=64, out_features=100, bias=True).
There are 100 different classes.
I am using an LSTM, and I think the issue has to do with how I encode the target label, which in this case is a single index value for the particular target class. I need help figuring this out.
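One way the shapes could line up (a sketch under assumptions, since the actual model code isn't shown: layer names, embedding size, and the small vocabulary below are all made up, and the word indices are placeholders for real word2index values) is to keep only the LSTM's final timestep before the linear layer, so the classifier emits one prediction per sentence instead of one per word. Note also that nn.NLLLoss expects log-probabilities, so LogSoftmax rather than Softmax is used here.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; in_features=64 and 100 classes match the question.
vocab_size, embedding_dim, hidden_dim, num_classes = 50, 32, 64, 100

embedding = nn.Embedding(vocab_size, embedding_dim)
lstm = nn.LSTM(embedding_dim, hidden_dim)          # expects (seq_len, batch, embed)
fc_layer = nn.Linear(hidden_dim, num_classes)
log_softmax = nn.LogSoftmax(dim=1)                 # NLLLoss needs log-probabilities

# Placeholder indices standing in for the real word2index output
sentence = torch.tensor([14, 9, 36, 4, 27, 12, 5, 6, 13])

embeds = embedding(sentence).unsqueeze(1)          # (9, 1, embedding_dim)
lstm_out, (h_n, c_n) = lstm(embeds)                # lstm_out: (9, 1, hidden_dim)
last_step = lstm_out[-1]                           # (1, hidden_dim): final timestep only
label_pred = log_softmax(fc_layer(last_step))      # (1, num_classes)

target_label = torch.tensor([5])                   # single class index, batch of 1
loss = nn.NLLLoss()(label_pred, target_label)      # batch sizes now both 1
print(label_pred.shape, loss.item())
```

With both tensors having batch size 1, the loss computes without the ValueError.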