Simple VGG16 on MNIST (10 classes)

I have this notebook, where a simple VGG16 is used for classification on MNIST:

These are the results of training for 10 epochs:

Epoch 1: TrL=0.3749, TrA=0.1150, VL=0.1300, VA=0.1121, TeL=0.1198, TeA=0.1120,
Epoch 2: TrL=0.0821, TrA=0.1123, VL=0.0889, VA=0.1115, TeL=0.0556, TeA=0.1131,
Epoch 3: TrL=0.0577, TrA=0.1122, VL=0.0677, VA=0.1130, TeL=0.0566, TeA=0.1137,
Epoch 4: TrL=0.0428, TrA=0.1122, VL=0.0648, VA=0.1130, TeL=0.0461, TeA=0.1139,
Epoch 5: TrL=0.0343, TrA=0.1122, VL=0.0611, VA=0.1130, TeL=0.0412, TeA=0.1138,
Epoch 6: TrL=0.0302, TrA=0.1120, VL=0.0581, VA=0.1128, TeL=0.0401, TeA=0.1144,
Epoch 7: TrL=0.0257, TrA=0.1122, VL=0.0474, VA=0.1129, TeL=0.0384, TeA=0.1138,
Epoch 8: TrL=0.0224, TrA=0.1122, VL=0.0465, VA=0.1124, TeL=0.0394, TeA=0.1136,
Epoch 9: TrL=0.0179, TrA=0.1122, VL=0.0460, VA=0.1130, TeL=0.0374, TeA=0.1138,
Epoch 10: TrL=0.0162, TrA=0.1122, VL=0.0435, VA=0.1126, TeL=0.0348, TeA=0.1136,

How is it possible that the accuracy is so low? Do you think there is an error in the training function, where I compute the accuracy?

I think there is an error in the accuracy calculation, as the equality comparison is done twice.

    pred_labels = (pred_label == labels).float()
    batch_accuracy = (pred_labels == labels).sum().item() / input.size(0)

should probably be

    pred_labels = (pred_label == labels).float()
    batch_accuracy = pred_labels.sum().item() / input.size(0)
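
As a side note, the same batch accuracy can be written a bit more compactly (a sketch, assuming `pred_label` and `labels` are 1-D tensors of class indices):

    # equivalent one-liner: mean of the 0/1 correctness tensor
    batch_accuracy = (pred_label == labels).float().mean().item()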

The previous code compares the labels against the already-compared result, which is a tensor of 0s and 1s. Since most predictions are correct (mostly 1s), the second comparison effectively counts how many samples in the batch have the label 1. On MNIST, roughly 11% of the samples are 1s, which matches the ~0.112 "accuracy" in your log.
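
Here is a minimal, self-contained sketch that reproduces the effect outside the training loop (the tensor names mirror your snippet; the perfect-classifier setup is hypothetical, just to make the bug obvious):

    import torch

    torch.manual_seed(0)
    labels = torch.randint(0, 10, (10_000,))   # ground-truth digits 0-9
    pred_label = labels.clone()                # pretend the model is 100% correct

    pred_labels = (pred_label == labels).float()   # all ones here

    # buggy: compares the 0/1 correctness tensor against the digit labels,
    # so it only counts samples whose label happens to be 1
    buggy = (pred_labels == labels).sum().item() / labels.size(0)

    # fixed: just counts the 1s in the correctness tensor
    fixed = pred_labels.sum().item() / labels.size(0)

    print(f"buggy accuracy: {buggy:.4f}")   # ~0.10, despite perfect predictions
    print(f"fixed accuracy: {fixed:.4f}")   # 1.0000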


Thank you, that's correct!