This question may seem a little odd: I am printing a multi-class confusion matrix and the output is not entirely clear to me. I took the confusion-matrix code from this helpful forum and changed it slightly: I wrapped the whole thing in a function, and I fetch the number of classes from my dataset.
import torch

# n_classes is fetched from my dataset
batch_size = 32  # nb_samples
output = torch.randn(batch_size, n_classes)  # raw logits; argmax is the same before or after softmax
target = torch.randint(0, n_classes, (batch_size,))  # labels

def confusion_matrix(preds, labels):
    preds = torch.argmax(preds, 1)
    conf_matrix = torch.zeros(n_classes, n_classes)
    for p, t in zip(preds, labels):
        conf_matrix[p, t] += 1  # rows = predictions, columns = targets
    print(conf_matrix)
    TP = conf_matrix.diag()
    for c in range(n_classes):
        idx = torch.ones(n_classes).bool()  # mask that excludes class c
        idx[c] = 0
        TN = conf_matrix[idx.nonzero()[:, None], idx.nonzero()].sum()
        FP = conf_matrix[c, idx].sum()
        FN = conf_matrix[idx, c].sum()
        sensitivity = TP[c] / (TP[c] + FN)
        specificity = TN / (TN + FP)
        print('Class {}\nTP {}, TN {}, FP {}, FN {}'.format(
            c, TP[c], TN, FP, FN))
        print('Sensitivity = {}'.format(sensitivity))
        print('Specificity = {}'.format(specificity))

confusion_matrix(output, target)
Now, my first question: is this batch_size = 32 the same value as the batch_size=32 here?
dataloader_train = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True, num_workers=2)
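To check this for myself, I tried pulling a single batch from a loader and comparing shapes. This is just a sketch: the random TensorDataset below is a stand-in for my real train_data, and the names are my own.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

n_classes = 4
# stand-in for train_data: 100 random "images" with random labels
train_data = TensorDataset(torch.randn(100, 3, 8, 8),
                           torch.randint(0, n_classes, (100,)))
dataloader_train = DataLoader(train_data, batch_size=32, shuffle=True)

# one batch from the loader has exactly batch_size samples
images, labels = next(iter(dataloader_train))
print(images.shape[0])  # 32
```

So each batch the loader yields has 32 samples, the same number the confusion-matrix code above was run on.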
Second, this is the output I got from my code. I have two sets: train (around 1000 images per class) and val (around 8 images per class).
tensor([[3., 2., 0., 3.],
[1., 0., 1., 1.],
[3., 2., 4., 2.],
[5., 2., 1., 2.]])
Class 0
TP 3.0, TN 15.0, FP 5.0, FN 9.0
Sensitivity = 0.25
Specificity = 0.75
Class 1
TP 0.0, TN 23.0, FP 3.0, FN 6.0
Sensitivity = 0.0
Specificity = 0.8846153616905212
Class 2
TP 4.0, TN 19.0, FP 7.0, FN 2.0
Sensitivity = 0.6666666865348816
Specificity = 0.7307692170143127
Class 3
TP 2.0, TN 16.0, FP 8.0, FN 6.0
Sensitivity = 0.25
Specificity = 0.6666666865348816
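As a sanity check on my own output, the Class 0 numbers can be re-derived by hand from the printed matrix (in my code, rows are predictions and columns are targets):

```python
import torch

# the confusion matrix printed above
m = torch.tensor([[3., 2., 0., 3.],
                  [1., 0., 1., 1.],
                  [3., 2., 4., 2.],
                  [5., 2., 1., 2.]])

TP = m[0, 0]                 # 3: predicted class 0, actually class 0
FP = m[0, :].sum() - TP      # 5: predicted class 0, actually another class
FN = m[:, 0].sum() - TP      # 9: actually class 0, predicted another class
TN = m.sum() - TP - FP - FN  # 15: everything else
print(TP.item(), TN.item(), FP.item(), FN.item())  # 3.0 15.0 5.0 9.0
```

These match the Class 0 line above, so at least the per-class bookkeeping is consistent with the matrix.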
I know that from this matrix I can see that my classifier made 3 correct predictions for the first class, 0 for the second class, 4 for the third class, and 2 for the fourth class (the diagonal entries).
I have studied some sites about the multi-class confusion matrix, but I am not sure about the numbers in the matrix and the class-wise TP, TN, FP, and FN. Also, I have around 1000 images per class, so why are these numbers so small? And do I just multiply the sensitivity and specificity values by 100 to get percentages? Thanks for the help; this confusion matrix is really confusing to me.
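One thing I noticed while writing this up: the matrix above sums to 32 because it was built from a single batch. A sketch of accumulating counts over many batches (so the totals cover every sample, not just one batch) might look like the following; the random logits here stand in for model(images) over batches of a loader, and all names are my own.

```python
import torch

n_classes = 4
conf_matrix = torch.zeros(n_classes, n_classes)

torch.manual_seed(0)
# 10 batches of 32 samples -> 320 samples total; in practice each (output,
# target) pair would come from the model and the DataLoader
for _ in range(10):
    output = torch.randn(32, n_classes)
    target = torch.randint(0, n_classes, (32,))
    preds = torch.argmax(output, 1)
    for p, t in zip(preds, target):
        conf_matrix[p, t] += 1  # rows = predictions, columns = targets

print(conf_matrix.sum().item())  # 320.0 -- counts now cover every sample
# per-class sensitivity as a percentage: diagonal over column sums, times 100
sensitivity_pct = 100 * conf_matrix.diag() / conf_matrix.sum(0)
```

If that is right, the small numbers just reflect one 32-sample batch, and multiplying by 100 is indeed how the ratios become percentages.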