Problem in getting accuracy of different classes in image classification

I am doing an assignment related to image classification. My dataset has 5 different classes (labels), and I have to calculate the accuracy of each class separately. My problem is that whenever I test my CNN model I obtain 100% accuracy for one label and 0% accuracy for all the others, and I cannot figure out where my implementation is wrong.
Here is the code I used for calculating the per-class accuracy:

    classes = ['1', '2', '3', '4', '5']

    class_correct = list(0. for i in range(5))
    class_total = list(0. for i in range(5))

    net.eval()  # prep model for evaluation

    for batch in test_loader:
        data, target = batch['image'], batch['grade']

        # skip the final batch if it is smaller than the batch size BS
        if len(target) != BS:
            continue
        # forward pass: compute predicted outputs by passing inputs to the model
        output = net(data)
        # convert output probabilities to predicted class
        _, pred = torch.max(output, 1)
        # compare predictions to the true labels
        correct = np.squeeze(pred.eq(target.data.view_as(pred)))
        # calculate test accuracy for each class
        for i in range(BS):
            label = target.data[i]
            class_correct[label] += correct[i].item()
            class_total[label] += 1

    for i in range(5):
        print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
            classes[i], 100 * class_correct[i] / class_total[i],
            class_correct[i], class_total[i]))
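As a sanity check, the same bookkeeping can be exercised in isolation with dummy tensors (the predictions, labels, and batch size below are made up for illustration, not taken from the actual model):

```python
import numpy as np
import torch

BS = 4  # assumed batch size for this sketch
pred = torch.tensor([0, 0, 2, 1])    # dummy predicted classes
target = torch.tensor([0, 1, 2, 1])  # dummy true labels

class_correct = [0.] * 5
class_total = [0.] * 5

# same logic as in the evaluation loop above
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
for i in range(BS):
    label = target.data[i]
    class_correct[label] += correct[i].item()
    class_total[label] += 1

print(class_correct)  # [1.0, 1.0, 1.0, 0.0, 0.0]
print(class_total)    # [1.0, 2.0, 1.0, 0.0, 0.0]
```

This confirms the per-class counters themselves behave as expected, which points the problem at the model rather than the accuracy calculation.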

These are the results I obtained:

    Accuracy of the network (overall): 46 %
    Test Accuracy of     1: 100% (652/652)
    Test Accuracy of     2:  0% ( 0/279)
    Test Accuracy of     3:  0% ( 0/459)
    Test Accuracy of     4:  0% ( 0/263)
    Test Accuracy of     5:  0% ( 0/47)

The calculation looks alright.
Your model might just be overfitting to class0, thus predicting every sample as this class.
Check the prediction distribution using

    torch.unique(pred, return_counts=True)

to see whether your model predicts other classes.

Whenever I called

    torch.unique(pred, return_counts=True)

I got the result below:

    (tensor([0]), tensor([50]))
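For reference, `torch.unique` with `return_counts=True` returns the distinct values and how often each occurs, so the result above means all 50 predictions in that batch were class 0. A small demo with a dummy tensor:

```python
import torch

# dummy predictions: class 0 three times, class 1 once, class 2 twice
pred = torch.tensor([0, 0, 0, 2, 2, 1])
values, counts = torch.unique(pred, return_counts=True)
print(values)  # tensor([0, 1, 2])
print(counts)  # tensor([3, 1, 2])
```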

Even though your class distribution is not very imbalanced, it seems your model is still overfitting to this class.
You could, e.g., add the weight parameter to your criterion and try to counter this effect.