IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Hi,

I use the following training code:

      model.zero_grad()
      out = model()
      print(y)
      print(out)
      loss = criterion(out, y)
      loss.backward(retain_graph = True)
      optimizer.step()

This code prints the following (y is a one-hot encoded label):

[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
tensor([0.0303, 0.0090, 0.0182, 0.2649, 0.2079, 0.0842, 0.4543, 0.0255, 0.0294,
        0.8613], grad_fn=<MulBackward0>)

I get the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-110-56f054f4ae5c> in <module>()
     21       print(y)
     22       print(out)
---> 23       loss = criterion(out, y)
     24       loss.backward(retain_graph = True)
     25       optimizer.step()

3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--> 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    940     def forward(self, input, target):
    941         return F.cross_entropy(input, target, weight=self.weight,
--> 942                                ignore_index=self.ignore_index, reduction=self.reduction)
    943 
    944 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2054     if size_average is not None or reduce is not None:
   2055         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2056     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2057 
   2058 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel, dtype)
   1348         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
   1349     if dtype is None:
-> 1350         ret = input.log_softmax(dim)
   1351     else:
   1352         ret = input.log_softmax(dim, dtype=dtype)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Does this mean the loss function expects a scalar “out”?
(I have not yet applied softmax to “out” in the output layer.)

The shape of out is expected to be [batch_size, nb_classes], while yours seems to be only [batch_size]. If you are dealing with a binary classification use case, you could use nn.BCEWithLogitsLoss (or nn.BCELoss, if you already applied sigmoid on your output).
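To illustrate the expected shapes, here is a minimal sketch with made-up sizes (not your actual model):

import torch
import torch.nn as nn

batch_size, nb_classes = 4, 10

# nn.CrossEntropyLoss expects logits of shape [batch_size, nb_classes]
# and a target holding class indices with shape [batch_size]
logits = torch.randn(batch_size, nb_classes)
target = torch.randint(0, nb_classes, (batch_size,))
loss = nn.CrossEntropyLoss()(logits, target)

# nn.BCEWithLogitsLoss expects a float target with the same shape
# as the (raw, un-sigmoided) model output
out = torch.randn(batch_size)
target = torch.randint(0, 2, (batch_size,)).float()
loss = nn.BCEWithLogitsLoss()(out, target)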


@ptrblck -san,

Thank you for your reply. I now use nn.BCEWithLogitsLoss instead of nn.BCELoss, because the site below explains that nn.BCELoss is sometimes numerically unstable:

http://37ma5ras.blogspot.com/2017/12/loss-function.html

After that, I get a different error:

TypeError                                 Traceback (most recent call last)
<ipython-input-10-8d31dc232b4b> in <module>()
     22       print(out)
     23 
---> 24       loss = criterion(out, y)
     25       loss.backward(retain_graph = True)
     26       optimizer.step()

2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
   2158         reduction_enum = _Reduction.get_enum(reduction)
   2159 
-> 2160     if not (target.size() == input.size()):
   2161         raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   2162 

TypeError: 'int' object is not callable

The label is:

[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]

and the output from the model is:

tensor([8.9153e-01, 9.3097e-01, 9.7947e-04, 9.9783e-01, 4.7369e-03, 9.7412e-01,
        1.2952e-01, 1.8061e-01, 4.7231e-01, 8.2431e-01],
       grad_fn=<MulBackward0>)

Both are floats, so is some other argument being passed as an int?

It seems one of the arguments was passed as an int instead of a tensor.
Could you check that?
Also, based on your print statement I’m not sure if label is a list or a tensor.
Anyway, it should be a tensor with the same shape as your model’s output.
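Something like this would work (a sketch, assuming label is the printed 10-element list):

import torch

# hypothetical label, as printed above
label = [0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]

# convert it to a float tensor so it has a .size() method
# and matches the shape of the model output
y = torch.tensor(label, dtype=torch.float32)
print(y.size())  # torch.Size([10])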

@ptrblck -san,

Thank you very much for your advice. I checked the code again and found that I used the integer constant “1” instead of the float “1.” in the one-hot encoding. Now my model is training!
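For reference, the difference looks like this (a minimal sketch, not my actual encoding code):

import torch

# an int constant produces an int64 tensor,
# which nn.BCEWithLogitsLoss does not accept as a target
y_int = torch.tensor([0, 0, 0, 0, 0, 0, 0, 0, 1, 0])
print(y_int.dtype)    # torch.int64

# a float constant produces the float tensor the loss expects
y_float = torch.tensor([0., 0., 0., 0., 0., 0., 0., 0., 1., 0.])
print(y_float.dtype)  # torch.float32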

This is a very confusing error message:

Exception has occurred: IndexError
Dimension out of range (expected to be in range of [-1, 0], but got 1)

What does it mean for a dimension to be in the range of [-1,0]?


Anyway, I just have 2 tensors and I want to put them right next to each other in a new tensor:

x_proc1
tensor(-0.9214, grad_fn=<AddBackward0>)
x_proc2
tensor(-1., grad_fn=<AddBackward0>)
x = [x_proc1,x_proc2]
x_proc = torch.stack(x, 1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

The error message points you towards valid indices for a single dimension, which are -1 and 0:

x = torch.randn(1)
print(x[0])
print(x[-1])
print(x[1]) # error

You are passing a list of scalars to torch.stack, which cannot stack them in dim1.
What shape do you expect in x_proc?
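If you expect a 1-D tensor holding both values, stacking in dim0 would work (a minimal sketch):

import torch

x_proc1 = torch.tensor(-0.9214, requires_grad=True)
x_proc2 = torch.tensor(-1., requires_grad=True)

# 0-dim tensors can only be stacked along dim 0,
# since the result has a single dimension
x_proc = torch.stack([x_proc1, x_proc2], dim=0)
print(x_proc.shape)  # torch.Size([2])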

Hello @ptrblck, I am working on a project related to person re-identification. I am trying to re-implement the code of the CVPR paper “ABD-Net: Attentive but Diverse Person Re-Identification”. I trained the ABD-Net architecture with ResNet and DenseNet backbones, but when I try to train it with a ShuffleNet backbone I get this error. Could you please help me…

=================================================================

File "train.py", line 147, in main
train(epoch, model, criterion, regularizer, optimizer, trainloader, use_gpu, fixbase=True)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "F:\hayat ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 56, in forward
return self._forward(inputs[1], targets)
File "F:\hayatullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 52, in _forward
return sum([self.apply_loss(x, targets) for x in inputs_tuple]) / len(inputs_tuple)
File "F:\hayat ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 52, in <listcomp>
return sum([self.apply_loss(x, targets) for x in inputs_tuple]) / len(inputs_tuple)
File "F:\hayat ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 32, in apply_loss
log_probs = self.logsoftmax(inputs)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\activation.py", line 1179, in forward
return F.log_softmax(input, self.dim, _stacklevel=5)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py", line 1350, in log_softmax
ret = input.log_softmax(dim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

It seems your code uses nn.CrossEntropyLoss (a custom implementation?) at one point, which calls into F.log_softmax(input, dim).
The input seems to have a single dimension, while dim is set to 1, which will yield the error posted in my previous code snippet.

Check the activation tensor in your model and make sure it has the expected number of dimensions.
For a multi-class classification, your model output would have two dimensions as [batch_size, nb_classes].
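A minimal sketch of the failure mode (made-up sizes):

import torch
import torch.nn.functional as F

batch_size, nb_classes = 8, 751  # made-up sizes for illustration

# a 2-dimensional activation works: dim=1 indexes the class dimension
out = F.log_softmax(torch.randn(batch_size, nb_classes), dim=1)

# a 1-dimensional activation only has dim 0 (or -1), so dim=1 raises
# IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
out = F.log_softmax(torch.randn(nb_classes), dim=1)  # error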

Hey,
I am having the same error, but I am not doing binary classification: I have 3 classes.
What should I do? Can you please help me out?
Here is my code.

Classes

class dataset(Dataset):
    def __init__(self):
        self.tf=TfidfVectorizer(max_df=0.99, min_df=0.005)
        self.x=self.tf.fit_transform(corpus).toarray()
        self.y=list(df.review)
        self.x_train,self.x_test,self.y_train,self.y_test=train_test_split(self.x,self.y,test_size=0.2)
        self.token2idx=self.tf.vocabulary_
        self.idx2token = {idx: token for token, idx in self.token2idx.items()}
        print(self.idx2token)
    
    def __getitem__(self,i):
        return self.x_train[i, :], self.y_train[i]
    
    def __len__(self):
        return self.x_train.shape[0]


class classifier(nn.Module):
    def __init__(self,vocab_size,hidden1,hidden2):
        super(classifier,self).__init__()
        self.fc1=nn.Linear(vocab_size,hidden1)
        self.fc2=nn.Linear(hidden1,hidden2)
        self.fc3=nn.Linear(hidden2,1)
    def forward(self,inputs):
        x=F.relu(self.fc1(inputs.squeeze(1).float()))
        x=F.relu(self.fc2(x))
        return self.fc3(x)

Training Loop

epochs=10
total=0
model.train()
for epoch in tqdm(range(epochs)):
    progress_bar=tqdm_notebook(train_loader,leave=False)
    losses=[]
    correct=0
    for inputs,target in progress_bar:
        model.zero_grad()
        output=model(inputs)
        print(output.squeeze().shape)
        print(target.shape)
        loss=criterion(output.squeeze(),target.float())
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), 3)
        optim.step()
        correct += (output == target).float().sum()
        progress_bar.set_description(f'Loss: {loss.item():.3f}')
        losses.append(loss.item())
        total += 1
    epoch_loss = sum(losses) / total
    train_losses.append(epoch_loss)   
    tqdm.write(f'Epoch #{epoch + 1}\tTrain Loss: {epoch_loss:.3f}\tAccuracy: {correct/output.shape[0]}')

Error

IndexError                                Traceback (most recent call last)
<ipython-input-78-6b86c97bcabf> in <module>
     14         print(output.squeeze().shape)
     15         print(target.shape)
---> 16         loss=criterion(output.squeeze(),target.float())
     17         loss.backward()
     18         nn.utils.clip_grad_norm_(model.parameters(), 3)

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    930     def forward(self, input, target):
    931         return F.cross_entropy(input, target, weight=self.weight,
--> 932                                ignore_index=self.ignore_index, reduction=self.reduction)
    933 
    934 

~\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2315     if size_average is not None or reduce is not None:
   2316         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2317     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2318 
   2319 

~\Anaconda3\lib\site-packages\torch\nn\functional.py in log_softmax(input, dim, _stacklevel, dtype)
   1533         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
   1534     if dtype is None:
-> 1535         ret = input.log_softmax(dim)
   1536     else:
   1537         ret = input.log_softmax(dim, dtype=dtype)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Remove the squeeze() operation on the output tensor, as it’ll remove the class dimension.
Also note that nn.CrossEntropyLoss is used for multi-class classification, so returning the logit for a single class will only ever predict that class.
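A sketch of the fix under those assumptions (3 classes, hypothetical sizes):

import torch
import torch.nn as nn

# the last layer should return one logit per class, not a single value
fc3 = nn.Linear(64, 3)  # hidden2=64 is a made-up size

output = fc3(torch.randn(16, 64))    # shape [16, 3], no squeeze()
target = torch.randint(0, 3, (16,))  # class indices as long, not float

criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)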

Hey, I’m having the same error, but I don’t think the problem is quite the same.

I’m following PyTorch’s Classifier Tutorial, which uses CIFAR10, so it’s a multi-class problem. However, I want to use my own dataset, so I’m not using a DataLoader.

My X_train has shape (100, 3, 64, 64), a tensor of 64x64 images, and my y_train has been one-hot encoded using torch.nn.functional.one_hot, so it has shape (100, 3).

I’ve modified the training loop accordingly, but I keep getting the same IndexError.

Here’s the code for the Neural Network and the training loop:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(2704, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 3)
        self.ReLU = nn.ReLU()
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.ReLU(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = self.ReLU(x)
        x = self.pool(x)
        x = torch.flatten(x)
        x = self.fc1(x)
        x = self.ReLU(x)
        x = self.fc2(x)
        x = self.ReLU(x)
        x = self.fc3(x)
        x = self.softmax(x)
        return x

net = Net().to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

num_epochs = 10

for epoch in range(num_epochs):
    for i in range(len(X_train)):
        inputs = X_train[i]
        inputs = torch.unsqueeze(inputs, 0) # One sample at time.
        labels = y_train[i]

    optimizer.zero_grad()

    outputs = net(inputs)

    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    print(f"Epoch {i} concluded")```

Hi sir, please help me resolve this issue.

def forward(self, x):
    print("Input Image Shape:", x.shape)

    sources = self.basenet(x)
    print(sources)
    y = torch.cat([sources[0], sources[1]], dim=1)
    y = self.upconv1(y)

    y = F.interpolate(y, size=sources[2].size()[2:], mode='bilinear', align_corners=False)
    y = torch.cat([y, sources[2]], dim=1)
    y = self.upconv2(y)

    y = F.interpolate(y, size=sources[3].size()[2:], mode='bilinear', align_corners=False)
    y = torch.cat([y, sources[3]], dim=1)
    y = self.upconv3(y)

    y = F.interpolate(y, size=sources[4].size()[2:], mode='bilinear', align_corners=False)
    y = torch.cat([y, sources[4]], dim=1)
    feature = self.upconv4(y)

    line_seg_output = self.line_seg(feature)
    y = self.conv_cls(feature)

    return y.permute(0, 2, 3, 1), feature, line_seg_output

Training Loop remains unchanged

def train_model(model, train_loader, criterion, optimizer, num_epochs):
    model.train()
    for epoch in range(num_epochs):
        running_loss = 0.0
        for images, labels in train_loader:
            if labels is None:  # Skip if no labels
                continue
            images = images.to(device)
            labels = labels.to(device)

            optimizer.zero_grad()
            try:
                outputs, features, line_seg = model(images)
            except IndexError as e:
                print(f"IndexError during model forward: {e}")  # Print the error
                continue  # Skip this batch if an error occurs

            # Compute the loss using the predicted and ground truth boxes
            loss = compute_loss(outputs, labels)
            running_loss += loss.item()

            loss.backward()
            optimizer.step()

            # Calculate and print scores as needed
            print(f"Epoch [{epoch + 1}/{num_epochs}], Loss: {loss.item():.4f}")

error:

/opt/conda/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/opt/conda/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=None.
warnings.warn(msg)
Image shape: torch.Size([8, 3, 768, 768]), Label shape: torch.Size([8, 1, 768, 768])
Input Image Shape: torch.Size([8, 3, 768, 768])
tensor([[-0.4044, -0.7361,  0.4949,  ...,  0.4238,  0.3935,  0.0955],
        [-0.5368, -0.4963, -0.2644,  ..., -0.5811,  0.1188,  0.2896],
        [-0.1285, -0.0296, -0.0933,  ...,  0.7116,  0.3956, -0.4125],
        ...,
        [-0.4254, -0.3196, -0.4572,  ...,  0.2639,  0.1947, -0.2646],
        [-0.7573, -0.5146,  0.2086,  ..., -0.2370, -0.3366,  0.1840],
        [-0.5686, -0.1267, -0.7767,  ..., -1.0787,  0.7090, -0.0936]],
       device='cuda:0', grad_fn=<…>)
IndexError during model forward: Dimension out of range (expected to be in range of [-1, 0], but got 1)

[the same "Input Image Shape" print, 2-D tensor dump, and IndexError repeat for every following batch]