IndexError: index 646 is out of bounds for dimension 0 with size 39

I am getting this error:

     89     output = model.forward(images)
---> 90     conf_matrix = confusion_matrix(output, labels, conf_matrix)
     91     p = torch.nn.functional.softmax(output, dim=1)
     92     prediction = torch.argmax(p, dim=1)

     50     preds = torch.argmax(preds, 1)
     51     for p, t in zip(preds, labels):
---> 52         conf_matrix[p, t] += 1
     53
     54     #print(conf_matrix)

I have implemented transfer learning with VGG16 and am trying to print precision, recall, and the confusion matrix. I loaded VGG16 from torchvision.models and used it like this:

model = models.vgg16(pretrained=True)

for param in model.parameters():
    param.requires_grad = True

model.fc = nn.Sequential(nn.Linear(25088, 4096),
                         nn.ReLU(),
                         nn.Dropout(0.4),
                         nn.Linear(4096, 4096),
                         nn.ReLU(),
                         nn.Dropout(0.4),
                         nn.Linear(4096, 39),
                         nn.LogSoftmax(dim=1))

I am really not getting where exactly the problem is with the conf_matrix.

def confusion_matrix(preds, labels, conf_matrix, title='Confusion matrix', cmap=plt.cm.Blues):
    preds = torch.argmax(preds, 1)
    for p, t in zip(preds, labels):
        conf_matrix[p, t] += 1  # row = prediction, column = target

    #print(conf_matrix)
    #plt.imshow(conf_matrix)
    TP = conf_matrix.diag()
    for c in range(n_classes):
        idx = torch.ones(n_classes).byte()
        idx[c] = 0
        TN = conf_matrix[idx.nonzero()[:, None], idx.nonzero()].sum()  # everything outside row c and column c
        FP = conf_matrix[c, idx].sum()  # predicted c, true class differs
        FN = conf_matrix[idx, c].sum()  # true class c, predicted otherwise

        Recall = (TP[c] / (TP[c]+FN))
        precision = (TP[c] / (TP[c]+FP))
        f1 = (2 * ((precision * Recall)/(precision + Recall)))

        #print('Class {}\nTP {}, TN {}, FP {}, FN {}'.format(c, TP[c], TN, FP, FN))
        #print('Sensitivity = {}'.format(sensitivity))
        #print('Specificity = {}'.format(specificity))
            
    return conf_matrix

conf_matrix = torch.zeros(n_classes, n_classes)
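
For reference, here is a tiny standalone sketch (with made-up data for 3 classes, not from my model) of the convention the function above uses, i.e. rows are predictions and columns are targets:

import torch

n_classes = 3
conf = torch.zeros(n_classes, n_classes)

# toy predictions/targets, purely for illustration
preds  = torch.tensor([0, 1, 2, 2, 0, 1])
labels = torch.tensor([0, 1, 2, 0, 0, 2])
for p, t in zip(preds, labels):
    conf[p, t] += 1  # row = prediction, column = target

for c in range(n_classes):
    TP = conf[c, c]
    FP = conf[c, :].sum() - TP  # predicted c, true class differs
    FN = conf[:, c].sum() - TP  # true class c, predicted otherwise
    print('class {}: precision = {:.2f}, recall = {:.2f}'.format(
        c, (TP / (TP + FP)).item(), (TP / (TP + FN)).item()))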

Lastly, this is my training loop:

for images, labels in dataloader_train:
    
    #steps += 1
    images, labels = images.to(device), labels.to(device)
    
    optimizer.zero_grad()
    
    output = model.forward(images)
    conf_matrix = confusion_matrix(output, labels, conf_matrix)
    p = torch.nn.functional.softmax(output, dim=1)
    prediction = torch.argmax(p, dim=1)
    #loss = torch.nn.functional.nll_loss(torch.log(p), y)
    loss = criterion(output, labels)
    loss.backward()
    optimizer.step()
    
    train_loss += loss.item()*images.size(0)

Any help is appreciated. Thank you.

Could you print the shape of preds and labels as well as their min and max values, please?

Here you go @ptrblck,

print(output.shape)
print(labels.shape)
tensor_max_value = torch.max(output)
print(tensor_max_value)
tensor_min_value = torch.min(output)
print(tensor_min_value)
tensor_max_value1 = torch.max(labels)
print(tensor_max_value1)
tensor_min_value1 = torch.min(labels)
print(tensor_min_value1)

Output:

torch.Size([16, 1000])
torch.Size([16])
tensor(11.5087, device='cuda:0', grad_fn=<MaxBackward1>)
tensor(-6.5780, device='cuda:0', grad_fn=<MinBackward1>)
tensor(38, device='cuda:0')
tensor(3, device='cuda:0')

I have included the code to make sure I am going in the right direction.
The values above were printed from the training loop.

@ptrblck hi, can you please look into this? I am still confused: should the labels correspond to 39 classes?

Based on the shape of output it seems you are dealing with 1000 classes, which would also mean that your confusion matrix should be initialized with torch.zeros(1000, 1000).
Is this the case?
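
You could quickly verify the output size with a random input (a sketch, assuming the standard 224x224 input size):

import torch
from torchvision import models

model = models.vgg16(pretrained=True)
x = torch.randn(1, 3, 224, 224)
print(model(x).shape)  # torch.Size([1, 1000]) if the head was never replaced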

No, I have 39 classes; I am pretty confused.
You can see the Colab notebook here - https://colab.research.google.com/drive/1IEGwDU4c59xQN-4P4Z5UE0F6P_zslpiW
This doesn't happen with ResNet, but it does with other models like DenseNet or VGG. Please help @ptrblck

Make sure to replace the last linear layer with a new nn.Linear using out_features = 39.
Different models use different attributes for the last layer(s), e.g. model.fc, model.classifier, etc.
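
For VGG16 that would look something like this (a sketch; the index 6 assumes torchvision's classifier layout):

import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)
# VGG16 keeps its head in model.classifier; assigning to model.fc only
# creates a new, unused attribute, so the 1000-class head stays active.
model.classifier[6] = nn.Linear(4096, 39)

# or replace the whole head, matching the Sequential from the question:
model.classifier = nn.Sequential(nn.Linear(25088, 4096),
                                 nn.ReLU(),
                                 nn.Dropout(0.4),
                                 nn.Linear(4096, 4096),
                                 nn.ReLU(),
                                 nn.Dropout(0.4),
                                 nn.Linear(4096, 39),
                                 nn.LogSoftmax(dim=1))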

Sorry for the late reply. You have saved me @ptrblck, this is the solution. Thanks.