IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Hi,

I use the following training code:

      model.zero_grad()
      out = model()
      print(y)
      print(out)
      loss = criterion(out, y)
      loss.backward(retain_graph = True)
      optimizer.step()

This code produces the following output (y is a one-hot encoded label):

[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
tensor([0.0303, 0.0090, 0.0182, 0.2649, 0.2079, 0.0842, 0.4543, 0.0255, 0.0294,
        0.8613], grad_fn=<MulBackward0>)

I get this error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-110-56f054f4ae5c> in <module>()
     21       print(y)
     22       print(out)
---> 23       loss = criterion(out, y)
     24       loss.backward(retain_graph = True)
     25       optimizer.step()

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    940     def forward(self, input, target):
    941         return F.cross_entropy(input, target, weight=self.weight,
--> 942                                ignore_index=self.ignore_index, reduction=self.reduction)
    943 
    944 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2054     if size_average is not None or reduce is not None:
   2055         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2056     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2057 
   2058 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel, dtype)
   1348         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
   1349     if dtype is None:
-> 1350         ret = input.log_softmax(dim)
   1351     else:
   1352         ret = input.log_softmax(dim, dtype=dtype)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Does this mean the loss function expects a scalar “out”?
(I have NOT yet applied softmax to “out” in the output layer.)

The shape of out is expected to be [batch_size, nb_classes], while yours seems to be only [batch_size]. If you are dealing with a binary classification use case, you could use nn.BCEWithLogitsLoss (or nn.BCELoss, if you already applied sigmoid on your output).
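
For illustration, a minimal sketch of the shapes each loss expects, using toy tensors (not the poster's model):

import torch
import torch.nn as nn

batch_size, nb_classes = 4, 10

# Multi-class: nn.CrossEntropyLoss expects raw logits of shape
# [batch_size, nb_classes] and class indices of shape [batch_size].
out = torch.randn(batch_size, nb_classes)        # logits, no softmax applied
y = torch.randint(0, nb_classes, (batch_size,))  # class indices, not one-hot
loss = nn.CrossEntropyLoss()(out, y)

# Binary: nn.BCEWithLogitsLoss expects logits and float targets of the same shape.
out_bin = torch.randn(batch_size)
y_bin = torch.randint(0, 2, (batch_size,)).float()
loss_bin = nn.BCEWithLogitsLoss()(out_bin, y_bin)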


@ptrblck -san,

Thank you for your reply. I now use nn.BCEWithLogitsLoss instead of nn.BCELoss because the site below explains that nn.BCELoss can be numerically unstable:

http://37ma5ras.blogspot.com/2017/12/loss-function.html
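
As a quick sanity check on that claim, a toy sketch comparing the two (they agree for ordinary values; the fused version stays stable for large-magnitude logits):

import torch
import torch.nn as nn

logits = torch.tensor([2.0, -3.0, 0.5])
target = torch.tensor([1.0, 0.0, 1.0])

# Applies the sigmoid internally via a log-sum-exp formulation,
# which is numerically stable even for extreme logits.
stable = nn.BCEWithLogitsLoss()(logits, target)

# Mathematically equivalent, but sigmoid followed by log can saturate.
unstable = nn.BCELoss()(torch.sigmoid(logits), target)

print(stable.item(), unstable.item())  # nearly identical for these mild values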

After that, I get a different error:

TypeError                                 Traceback (most recent call last)
<ipython-input-10-8d31dc232b4b> in <module>()
     22       print(out)
     23 
---> 24       loss = criterion(out, y)
     25       loss.backward(retain_graph = True)
     26       optimizer.step()

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
   2158         reduction_enum = _Reduction.get_enum(reduction)
   2159 
-> 2160     if not (target.size() == input.size()):
   2161         raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   2162 

TypeError: 'int' object is not callable

The label is:

[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]

and the output from the model is:

tensor([8.9153e-01, 9.3097e-01, 9.7947e-04, 9.9783e-01, 4.7369e-03, 9.7412e-01,
        1.2952e-01, 1.8061e-01, 4.7231e-01, 8.2431e-01],
       grad_fn=<MulBackward0>)

Both are floats, so it seems some other argument is being passed as an int?

It seems as if one of the tensors was passed as an int instead.
Could you check that?
Also, based on your print statement I’m not sure if label is a list or a tensor.
Anyway, it should be a tensor with the same shape as your model’s output.
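
For reference, a minimal sketch of converting such a label to a tensor before calling the criterion (assuming the printed label really is a plain Python list):

import torch

label = [0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]

# nn.BCEWithLogitsLoss needs a float tensor with the same shape as the
# model output, so convert the list first:
y = torch.tensor(label, dtype=torch.float32)
print(y.shape)  # torch.Size([10]), matching the 10-element output above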

@ptrblck -san,

Thank you very much for your advice. I checked the code again and found that I had used the constant “1” instead of “1.” in the one-hot encoding. Now my model is training!
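
For later readers, a minimal sketch of building a float one-hot target directly (class index 8 matches the label printed earlier; the explicit float dtype avoids the int/float mix-up described above):

import torch

nb_classes = 10
class_idx = 8  # position of the "1." in the one-hot vector

y = torch.zeros(nb_classes, dtype=torch.float32)
y[class_idx] = 1.  # float constant, as in the fix above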

This is a very confusing error message:

Exception has occurred: IndexError
Dimension out of range (expected to be in range of [-1, 0], but got 1)

What does it mean for a dimension to be in the range of [-1,0]?


Anyway, I just have 2 tensors and I want to put them right next to each other in a new tensor:

x_proc1
tensor(-0.9214, grad_fn=<AddBackward0>)
x_proc2
tensor(-1., grad_fn=<AddBackward0>)
x = [x_proc1,x_proc2]
x_proc = torch.stack(x, 1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

The error message points you towards valid indices for a single dimension, which are -1 and 0:

x = torch.randn(1)
print(x[0])
print(x[-1])
print(x[1]) # error

You are passing a list of scalars to torch.stack, which cannot stack them in dim1.
What shape do you expect in x_proc?
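
One possible fix, depending on the desired shape: a sketch using the scalar values from above (without their grad history):

import torch

x_proc1 = torch.tensor(-0.9214)
x_proc2 = torch.tensor(-1.)

# 0-dim tensors can only be stacked in dim 0 (or -1):
x_proc = torch.stack([x_proc1, x_proc2], 0)  # shape: torch.Size([2])

# Unsqueeze afterwards if a 2D result such as [1, 2] is wanted:
x_proc_2d = torch.stack([x_proc1, x_proc2], 0).unsqueeze(0)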

Hello @ptrblck, I am working on a project related to person re-identification. I am trying to re-implement the code of the CVPR paper “ABD-Net: Attentive but Diverse Person Re-Identification”. I trained the ABD-Net architecture with ResNet and DenseNet backbones, but when I try to train it with a ShuffleNet backbone I get this error. Could you please help me…

=================================================================

File "train.py", line 147, in main
train(epoch, model, criterion, regularizer, optimizer, trainloader, use_gpu, fixbase=True)
File "train.py", line 246, in train
loss = criterion(outputs, pids)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "F:\hayat ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 56, in forward
return self._forward(inputs[1], targets)
File "F:\hayat ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 52, in _forward
return sum([self.apply_loss(x, targets) for x in inputs_tuple]) / len(inputs_tuple)
File "F:\hayat ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 52, in <listcomp>
return sum([self.apply_loss(x, targets) for x in inputs_tuple]) / len(inputs_tuple)
File "F:\hayat ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 32, in apply_loss
log_probs = self.logsoftmax(inputs)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\activation.py", line 1179, in forward
return F.log_softmax(input, self.dim, _stacklevel=5)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py", line 1350, in log_softmax
ret = input.log_softmax(dim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

It seems your code uses nn.CrossEntropyLoss (a custom implementation?) at one point, which calls into F.log_softmax(input, dim).
The input seems to have a single dimension, while dim is set to 1, which will yield the error posted in my previous code snippet.

Check the activation tensor in your model and make sure it has the expected number of dimensions.
For a multi-class classification, your model output would have two dimensions as [batch_size, nb_classes].
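
To reproduce that failure mode in isolation (toy tensors):

import torch
import torch.nn.functional as F

two_dim = torch.randn(4, 10)  # [batch_size, nb_classes]
print(F.log_softmax(two_dim, dim=1).shape)  # works: torch.Size([4, 10])

one_dim = torch.randn(4)       # only dim 0 (or -1) exists
F.log_softmax(one_dim, dim=1)  # IndexError: Dimension out of range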

Hey,
I am having the same error, but I am not doing binary classification. What should I do?
I have 3 classes.
Can you please help me out?
Here is my code.

Classes

class dataset(Dataset):
    def __init__(self):
        self.tf=TfidfVectorizer(max_df=0.99, min_df=0.005)
        self.x=self.tf.fit_transform(corpus).toarray()
        self.y=list(df.review)
        self.x_train,self.x_test,self.y_train,self.y_test=train_test_split(self.x,self.y,test_size=0.2)
        self.token2idx=self.tf.vocabulary_
        self.idx2token = {idx: token for token, idx in self.token2idx.items()}
        print(self.idx2token)
    
    def __getitem__(self,i):
        return self.x_train[i, :], self.y_train[i]
    
    def __len__(self):
        return self.x_train.shape[0]


class classifier(nn.Module):
    def __init__(self,vocab_size,hidden1,hidden2):
        super(classifier,self).__init__()
        self.fc1=nn.Linear(vocab_size,hidden1)
        self.fc2=nn.Linear(hidden1,hidden2)
        self.fc3=nn.Linear(hidden2,1)
    def forward(self,inputs):
        x=F.relu(self.fc1(inputs.squeeze(1).float()))
        x=F.relu(self.fc2(x))
        return self.fc3(x)

Training Loop

epochs=10
total=0
model.train()
for epoch in tqdm(range(epochs)):
    progress_bar=tqdm_notebook(train_loader,leave=False)
    losses=[]
    correct=0
    for inputs,target in progress_bar:
        model.zero_grad()
        output=model(inputs)
        print(output.squeeze().shape)
        print(target.shape)
        loss=criterion(output.squeeze(),target.float())
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), 3)
        optim.step()
        correct += (output == target).float().sum()
        progress_bar.set_description(f'Loss: {loss.item():.3f}')
        losses.append(loss.item())
        total += 1
    epoch_loss = sum(losses) / total
    train_losses.append(epoch_loss)   
    tqdm.write(f'Epoch #{epoch + 1}\tTrain Loss: {epoch_loss:.3f}\tAccuracy: {correct/output.shape[0]}')

Error

IndexError                                Traceback (most recent call last)
<ipython-input-78-6b86c97bcabf> in <module>
     14         print(output.squeeze().shape)
     15         print(target.shape)
---> 16         loss=criterion(output.squeeze(),target.float())
     17         loss.backward()
     18         nn.utils.clip_grad_norm_(model.parameters(), 3)

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    930     def forward(self, input, target):
    931         return F.cross_entropy(input, target, weight=self.weight,
--> 932                                ignore_index=self.ignore_index, reduction=self.reduction)
    933 
    934 

~\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2315     if size_average is not None or reduce is not None:
   2316         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2317     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2318 
   2319 

~\Anaconda3\lib\site-packages\torch\nn\functional.py in log_softmax(input, dim, _stacklevel, dtype)
   1533         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
   1534     if dtype is None:
-> 1535         ret = input.log_softmax(dim)
   1536     else:
   1537         ret = input.log_softmax(dim, dtype=dtype)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Remove the squeeze() operation on the output tensor, as it’ll remove the class dimension.
Also note that nn.CrossEntropyLoss is used for multi-class classification, so returning the logit for a single class will only ever predict that class.
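
A minimal sketch of the 3-class setup this implies (sizes are hypothetical): the last layer returns one logit per class, and the targets are class indices:

import torch
import torch.nn as nn

hidden2 = 84  # hypothetical hidden size
fc3 = nn.Linear(hidden2, 3)  # one logit per class, instead of nn.Linear(hidden2, 1)

output = torch.randn(8, 3)          # [batch_size, nb_classes], no squeeze()
target = torch.randint(0, 3, (8,))  # LongTensor of class indices, not floats
loss = nn.CrossEntropyLoss()(output, target)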

Hey, I’m having the same error, but I don’t think the problem is quite the same.

I’m following PyTorch’s classifier tutorial, which uses CIFAR10, so it’s a multi-class problem. However, I want to use my own dataset, so I’m not using a DataLoader.

My X_train has shape (100, 3, 64, 64), a tensor of 64x64 images, and my y_train has been one-hot encoded using torch.nn.functional.one_hot, so it has shape (100, 3).

I’ve modified the training loop accordingly, but I keep getting the same IndexError.

Here’s the code for the Neural Network and the training loop:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(2704, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 3)
        self.ReLU = nn.ReLU()
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.ReLU(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = self.ReLU(x)
        x = self.pool(x)
        x = torch.flatten(x)
        x = self.fc1(x)
        x = self.ReLU(x)
        x = self.fc2(x)
        x = self.ReLU(x)
        x = self.fc3(x)
        x = self.softmax(x)
        return x

net = Net().to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

num_epochs = 10

for epoch in range(num_epochs):
    for i in range(len(X_train)):
        inputs = X_train[i]
        inputs = torch.unsqueeze(inputs, 0) # One sample at a time.
        labels = y_train[i]

        optimizer.zero_grad()

        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch} concluded")