Shape of tensor changes after slicing. RuntimeError: stack expects each tensor to be equal size, but got [32, 1] at entry 0 and [32, 0] at entry 1

I have a very large tensor of shape (512, 3, 224, 224). I feed it to the model in batches of 32 and save the scores corresponding to the target label, which is 2. In each iteration, after every slice, the shape of scores changes, which leads to the following error. What am I doing wrong, and how do I fix it?
```python
label = torch.ones(1) * 2

def sub_forward(self, x):
    x = self.vgg16(x)
    x = self.bn1(x)
    x = self.linear1(x)
    x = self.linear2(x)
    return x

def get_scores(self, imgs, targets):
    b, _, _, _ = imgs.shape
    batch_size = 32
    total_scores = []
    for i in range(0, b, batch_size):
        scores = self.sub_forward(imgs[i:i+batch_size, :, :, :])
        scores = F.softmax(scores)
        labels = targets[i:i+batch_size]
        labels = labels.long()
        scores = scores[:, labels]
        print(i, " scores: ", scores)
        total_scores.append(scores)
        print(i, " total_scores: ", total_scores)
    total_scores = torch.stack(total_scores)
    return scores
```
```
0  scores:  tensor([[0.0811],
        [0.0918],
        [0.0716],
        ...
        [0.1970],
        [0.1094]], device='cuda:0')
0  total_scores:  [tensor([[0.0811], ..., [0.1094]], device='cuda:0')]
32  scores:  tensor([], device='cuda:0', size=(32, 0))
32  total_scores:  [tensor([[0.0811], ..., [0.1094]], device='cuda:0'),
                   tensor([], device='cuda:0', size=(32, 0))]
```

```
RuntimeError: stack expects each tensor to be equal size, but got [32, 1] at entry 0 and [32, 0] at entry 1
```

It seems that the indexing line `scores = scores[:, labels]` gives 0 results in the 2nd iteration. What does `labels` contain?

I pass `label` to the `get_scores` function:

```python
label = torch.ones(1) * 2
get_scores(self, imgs, label)
```

Something is not right.

You won’t be able to index `targets` if you define it as `label = torch.ones(1) * 2`.

Well, that works fine for me. Later I will have a much larger tensor of targets.

Interesting.

```python
x = torch.tensor(1) * 2
x[0]
```

To me, this gives `IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number` in PyTorch 1.3.1.
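
Note the difference, though: `torch.tensor(1)` creates a 0-dim tensor, while `torch.ones(1)` creates a 1-dim tensor, which can be indexed and sliced. A minimal comparison:

```python
import torch

x = torch.tensor(1) * 2  # 0-dim tensor: tensor(2)
# x[0]                   # IndexError: invalid index of a 0-dim tensor

y = torch.ones(1) * 2    # 1-dim tensor: tensor([2.])
print(y[0])              # tensor(2.) -- indexing works
print(y[32:64])          # tensor([]) -- out-of-bounds slice returns an empty tensor
```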

Interesting indeed.
For the following code I get no error:

```python
label = torch.ones(1) * 2
print(label[32:64])
```

It gives the following result:

```
tensor([])
```

I am using Google Colab at the moment.

This explains why there are 0 results.
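
To make the connection explicit: indexing the class dimension with that empty slice is what produces the `(32, 0)` scores. A small illustration (the shapes here are stand-ins, not the actual model output):

```python
import torch

scores = torch.randn(32, 10)           # stand-in for softmax scores of one chunk
labels = (torch.ones(1) * 2)[32:64]    # out-of-bounds slice -> tensor([])
print(scores[:, labels.long()].shape)  # torch.Size([32, 0])
```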

Can you check the PyTorch version?

```python
print(torch.__version__)
```

```
PyTorch Version:  1.5.0+cu101
Torchvision Version:  0.6.0+cu101
```

What should I do now?

Are there any alternatives to this?

@ptrblck any suggestions?

Slicing “out of bounds” will return an empty tensor, which is expected.
It seems you are passing `targets`, not `labels`, to `get_scores` and calculating `labels` as:

```python
labels = targets[i:i+batch_size]
```

Could you check the shape of targets before the slicing operation and labels afterwards?

```python
def get_scores(self, imgs, targets):
    b, _, _, _ = imgs.shape
    batch_size = 32
    total_scores = []
    for i in range(0, b, batch_size):
        scores = self.sub_forward(imgs[i:i+batch_size, :, :, :])
        scores = F.softmax(scores)
        print("targets: ", targets)
        labels = targets[i:i+batch_size]
        print("labels: ", labels)
        labels = labels.long()
        scores = scores[:, labels]
        print(i, " scores: ", scores)
        total_scores.append(scores)
        print(i, " total_scores: ", total_scores)
    total_scores = torch.stack(total_scores)
    return scores
```

```
targets:  tensor([2.])
labels:  tensor([2.])
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:38: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
0  scores:  tensor([[0.5671],
        [0.5106],
        ...
        [0.3748],
        [0.5434]], device='cuda:0')
0  total_scores:  [tensor([[0.5671], ..., [0.5434]], device='cuda:0')]
targets:  tensor([2.])
labels:  tensor([])
32  scores:  tensor([], device='cuda:0', size=(32, 0))
32  total_scores:  [tensor([[0.5671], ..., [0.5434]], device='cuda:0'),
                   tensor([], device='cuda:0', size=(32, 0))]
targets:  tensor([2.])
labels:  tensor([])
64  scores:  tensor([], device='cuda:0', size=(32, 0))
64  total_scores:  [..., tensor([], device='cuda:0', size=(32, 0)),
                   tensor([], device='cuda:0', size=(32, 0))]
targets:  tensor([2.])
labels:  tensor([])
96  scores:  tensor([], device='cuda:0', size=(32, 0))
96  total_scores:  [..., tensor([], device='cuda:0', size=(32, 0)),
                   tensor([], device='cuda:0', size=(32, 0)),
                   tensor([], device='cuda:0', size=(32, 0))]
```

Should I use an if condition before slicing to see if I’m slicing out of bounds?

I’m not sure a condition will save you, since you expect `targets` to have an entry for every image (length `b`), while it seems to have a single element.
I would recommend taking another look at how `targets` is defined and why it’s smaller than you expect.


You’re right. In my code, I’m repeating the input images, but I didn’t do that for the labels. Thank you so much for your time. Can you look at the other question I posted after this?
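
For later readers, a minimal sketch of the fix, assuming every image shares the same target class (2 here); the `repeat` call and the `gather`-based indexing are one way to write it, not necessarily how the original code was patched:

```python
def get_scores(self, imgs, targets):
    # targets must hold one label per image, e.g.
    # targets = (torch.ones(1) * 2).repeat(imgs.shape[0])
    b = imgs.shape[0]
    batch_size = 32
    total_scores = []
    for i in range(0, b, batch_size):
        scores = F.softmax(self.sub_forward(imgs[i:i+batch_size]), dim=1)
        labels = targets[i:i+batch_size].long()
        # pick each sample's score at its own label, instead of indexing
        # the class dimension with the whole label vector
        total_scores.append(scores.gather(1, labels.unsqueeze(1)))  # (chunk, 1)
    return torch.cat(total_scores)  # shape (b, 1)
```

`torch.cat` along dim 0 also avoids the original `stack` error when the last chunk is smaller than `batch_size`.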