Single vs Batch Prediction: IndexError: Dimension out of range

I get this error when calculating softmax() on my batch predictions:

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Here’s code that produces correct predictions for single sentences. The print() output shows that I’m not handling the batch output tensors properly.


# Skipping code so assume model=BertForSequenceClassification and tokenizer = BertTokenizerFast
sentences = ["Eat an apple", "Eat an orange"]

inputs_0 = tokenizer(sentences[0], return_tensors="pt")
inputs_1 = tokenizer(sentences[1], return_tensors="pt")
inputs_all = tokenizer(sentences, return_tensors="pt")

outputs_0 = model(**inputs_0)
outputs_1 = model(**inputs_1)
outputs_all = model(**inputs_all)


prediction_0 = outputs_0[0].softmax(1).argmax().item()
prediction_1 = outputs_1[0].softmax(1).argmax().item()
prediction_all_0 = outputs_all[0][1].softmax(1).argmax().item()
prediction_all_1 = outputs_all[0][1].softmax(1).argmax().item()


tensor([[-2.7620,  2.9584]], grad_fn=<AddmmBackward0>)
tensor([[-2.7471,  2.9272]], grad_fn=<AddmmBackward0>)

tensor([[-2.7620,  2.9584], [-2.7471,  2.9272]], grad_fn=<SliceBackward0>)
tensor([-2.7620,  2.9584], grad_fn=<SelectBackward0>)
tensor([-2.7471,  2.9272], grad_fn=<SelectBackward0>)
IndexError                                Traceback (most recent call last)
Input In [278], in <cell line: 7>()
      5 prediction_0 = outputs_0[0].softmax(1).argmax().item()
      6 prediction_1 = outputs_1[0].softmax(1).argmax().item()
----> 7 prediction_all_0 = outputs_all[0][1].softmax(1).argmax().item()
      8 prediction_all_1 = outputs_all[0][1].softmax(1).argmax().item()

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

In this line of code:

prediction_all_0 = outputs_all[0][1].softmax(1).argmax().item()

you are expecting outputs_all[0][1] to have at least 2 dimensions, which is not the case.
Check your indexing logic and make sure you are working on the desired dimensions.
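A minimal sketch of the dimension issue, using your logits hard-coded so no model is needed: indexing a row of the 2-D logits tensor drops the batch dimension and leaves a 1-D tensor, so dim=1 no longer exists; slicing keeps the batch dimension.

```python
import torch

# Stand-in for outputs_all[0]: shape (batch=2, num_labels=2)
logits = torch.tensor([[-2.7620, 2.9584], [-2.7471, 2.9272]])

row = logits[0]        # indexing drops the batch dim -> shape (2,), a 1-D tensor
# row.softmax(1)       # IndexError: dim 1 is out of range for a 1-D tensor
pred = row.softmax(0).argmax().item()    # softmax over the only dim -> 1

kept = logits[0:1]     # slicing keeps the batch dim -> shape (1, 2)
pred2 = kept.softmax(1).argmax().item()  # dim 1 exists again -> 1
```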

Yes, @ptrblck. That’s what I’m doing wrong. From what I’ve seen others do, my inputs to tokenizer() are correct, i.e. a list of sentences. So it seems to me the issue is how I’m handling the output of model().

When passing the two sentences one at a time, I get this:
tensor([[-2.7620, 2.9584]], grad_fn=<AddmmBackward0>)
tensor([[-2.7471, 2.9272]], grad_fn=<AddmmBackward0>)

So I would expect passing a list of sentences to return an array of the prior results, like below:
tensor([[[-2.7620, 2.9584]], [[-2.7471, 2.9272]]], grad_fn=<SliceBackward0>)

So I’m still confused about how to access each result individually. Wondering if the change from “AddmmBackward0” to “SliceBackward0” is a hint as to what I’m doing wrong.

Thanks for the encouragement, @ptrblck. Found that the answer was to properly slice the 2-D tensor. I still have much more to learn about tensors, as now the grad_fn has changed yet again.


prediction_all_0 = outputs_all[0][0:1, 0:2].softmax(1).argmax().item()
print(outputs_all[0][0:1, 0:2])


tensor([[-2.7620,  2.9584]], grad_fn=<SliceBackward0>)
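For anyone landing here later, the per-row slicing can be avoided entirely by running softmax and argmax over the label dimension of the whole batch at once. A minimal sketch with the logits hard-coded (assuming outputs_all[0] is the (batch, num_labels) logits tensor, also available as outputs_all.logits in transformers):

```python
import torch

# Stand-in for outputs_all[0]: shape (batch=2, num_labels=2)
logits = torch.tensor([[-2.7620, 2.9584], [-2.7471, 2.9272]])

# Softmax over the label dim, then one argmax per row of the batch
preds = logits.softmax(dim=1).argmax(dim=1)  # tensor([1, 1])

prediction_all_0 = preds[0].item()
prediction_all_1 = preds[1].item()
```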