DAN / IndexError: Target 1 is out of bounds

Hi, I am trying to implement a deep averaging network (DAN) with GloVe embeddings. My code keeps failing in the loss function. I checked the shapes of both y and log_probs, and they don't seem to be the problem. Could you please help me look into this?

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

num_epoch = 100
emb_size = 128
num_classes = 1

DAN = NeuralSentimentClassifier(word_embeddings.get_embedding_length(), emb_size, num_classes, word_embeddings)
optimizer = optim.Adam(DAN.parameters(), lr=0.1)
loss_function = nn.NLLLoss()

for epoch in range(num_epoch):
    #total_loss = 0.0
    for se in train_exs:
        #1# Prepare for inputs
        x = []
        for word in se.words:
            x.append(word.lower())
        y = torch.from_numpy(np.asarray(se.label)).long()
        #2# Zero the gradients
        DAN.zero_grad()
        #3# Forward the embedding
        log_probs = DAN.forward(x)
        #4# Calculate Loss
        loss = loss_function(log_probs, y)
        #5# Backprop and update loss
        loss.backward()
        optimizer.step()

@Asonjay what is the exact error that you are getting?

Thank you so much for your reply! The error comes from

loss = loss_function(log_probs, y)

The traceback is:

Traceback (most recent call last):
  File "C:\Users\Jason\Dropbox\Academic\SP2022\CSE5525\A2\a2-distrib\neural_sentiment_classifier.py", line 106, in <module>
    model = train_deep_averaging_network(args, train_exs, dev_exs, word_embeddings)
  File "C:\Users\Jason\Dropbox\Academic\SP2022\CSE5525\A2\a2-distrib\models.py", line 112, in train_deep_averaging_network
    loss = loss_function(log_probs, y)
  File "C:\Users\Jason\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jason\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\loss.py", line 211, in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
  File "C:\Users\Jason\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\functional.py", line 2532, in nll_loss
    return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 1 is out of bounds.

nn.NLLLoss is used for multi-class classification (or segmentation), so num_classes = 1 doesn't make sense: your model would only ever predict a single class (class 0).
Since your target tensor contains (at least) the index 1, you are dealing with (at least) 2 classes and need num_classes=2, so that the model output has the shape [batch_size, nb_classes=2].
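
As a quick sanity check, here is a minimal self-contained sketch (with made-up dimensions and a plain nn.Linear head standing in for your NeuralSentimentClassifier) of the shapes nn.NLLLoss expects for two-class sentiment classification:

import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size, emb_size, num_classes = 4, 128, 2   # 2 classes: negative / positive

# Dummy "averaged embedding" batch standing in for the DAN hidden representation
avg_emb = torch.randn(batch_size, emb_size)

# A single linear layer standing in for the classifier head
head = nn.Linear(emb_size, num_classes)

# nn.NLLLoss expects log-probabilities, so apply log_softmax over the class dim
log_probs = F.log_softmax(head(avg_emb), dim=1)  # shape [batch_size, 2]

# Targets are class indices in [0, num_classes - 1], here 0 or 1
y = torch.tensor([0, 1, 1, 0])                   # shape [batch_size]

loss = nn.NLLLoss()(log_probs, y)                # works; with num_classes=1, target 1 would be out of bounds
print(loss)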


Thank you so much for your reply! I definitely need to read the signature of the loss function more carefully. Also, I am using pretrained GloVe data to train my model. How does nn.Embedding.from_pretrained work? Can I just feed the GloVe data (word, embedding, …) directly into it?

from_pretrained expects the weight tensor in the shape [num_embeddings, embedding_dim]. Afterwards you can pass word indices to this layer and will get the dense embedding vectors as outputs.
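
Roughly like this (the glove_matrix below is just a random stand-in for however your assignment code actually loads the GloVe vectors, and freeze=True is only one option):

import torch
import torch.nn as nn

# Pretend GloVe matrix: one row per vocabulary word, one column per embedding dim
vocab_size, embedding_dim = 5, 50
glove_matrix = torch.randn(vocab_size, embedding_dim)  # shape [num_embeddings, embedding_dim]

# freeze=True keeps the GloVe vectors fixed; use freeze=False to fine-tune them
emb = nn.Embedding.from_pretrained(glove_matrix, freeze=True)

# Look up by word index, not by the word strings themselves
word_indices = torch.tensor([0, 3, 1])                 # indices from your word-to-index mapping
dense = emb(word_indices)                              # shape [3, embedding_dim]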