Guys, I am making a classifier using ResNet and I want to get the sensitivity and specificity for a particular dataset. Right now I have the accuracy, train loss, and test loss. I have already studied true positives/negatives and false positives/negatives on Wikipedia and YouTube, and I know the formulas, but I do not know how to implement them, i.e. how to get the true positive and false negative counts. Here is my code -
Sensitivity and Specificity are usually defined for a binary classification problem.
Based on your code it looks like you are dealing with 4 classes.
In that case, you could apply a one vs. all approach, i.e. calculate the sensitivity and specificity for each class. For class0 this would be:
TP of class0 are all class0 samples classified as class0.
TN of class0 are all non-class0 samples classified as non-class0.
FP of class0 are all non-class0 samples classified as class0.
FN of class0 are all class0 samples not classified as class0.
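For a single class, the four definitions above can be counted directly with boolean masks over the prediction and target tensors. This is a minimal sketch with hypothetical toy values, not your actual data:

```python
import torch

# Toy predictions and targets for a 4-class problem (hypothetical values)
target = torch.tensor([0, 0, 1, 2, 3, 0, 1])
pred = torch.tensor([0, 1, 1, 2, 0, 0, 3])

c = 0  # class0
TP = ((pred == c) & (target == c)).sum().item()  # class0 classified as class0
TN = ((pred != c) & (target != c)).sum().item()  # non-class0 classified as non-class0
FP = ((pred == c) & (target != c)).sum().item()  # non-class0 classified as class0
FN = ((pred != c) & (target == c)).sum().item()  # class0 not classified as class0
print(TP, TN, FP, FN)
```

The four counts always sum to the number of samples, which is a quick sanity check.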
@MariosOreo Thanks for the catch! I’ve fixed it in my post.
@Deb_Prakash_Chatterj You could count it manually or create a confusion matrix first.
Based on the confusion matrix you could then calculate the stats.
Here is a small example. I tried to validate the results, but you should definitely have another look at it:
import torch

nb_samples = 20
nb_classes = 4

# random predictions and targets as stand-ins for your model output
output = torch.randn(nb_samples, nb_classes)
pred = torch.argmax(output, 1)
target = torch.randint(0, nb_classes, (nb_samples,))

# build the confusion matrix: rows are targets, columns are predictions
conf_matrix = torch.zeros(nb_classes, nb_classes)
for t, p in zip(target, pred):
    conf_matrix[t, p] += 1

print('Confusion matrix\n', conf_matrix)

TP = conf_matrix.diag()
for c in range(nb_classes):
    idx = torch.ones(nb_classes, dtype=torch.bool)
    idx[c] = False
    # all non-class samples classified as non-class
    TN = conf_matrix[idx][:, idx].sum()
    # all non-class samples classified as class
    FP = conf_matrix[idx, c].sum()
    # all class samples not classified as class
    FN = conf_matrix[c, idx].sum()
    print('Class {}\nTP {}, TN {}, FP {}, FN {}'.format(
        c, TP[c], TN, FP, FN))
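Once you have the per-class TP, TN, FP, and FN counts, the sensitivity and specificity follow directly from the standard formulas. A small sketch with hypothetical counts (the eps term guards against division by zero for classes with no samples):

```python
# Hypothetical per-class counts, e.g. read off the confusion matrix
TP, TN, FP, FN = 10.0, 50.0, 5.0, 2.0

eps = 1e-8  # avoids division by zero for empty classes
sensitivity = TP / (TP + FN + eps)  # true positive rate (recall)
specificity = TN / (TN + FP + eps)  # true negative rate
print('sensitivity {:.4f}, specificity {:.4f}'.format(sensitivity, specificity))
```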
Yeah, it worked, but can you please explain this, like what nb_samples is and what this part does -
conf_matrix = torch.zeros(nb_classes, nb_classes)
for t, p in zip(target, pred):
    conf_matrix[t, p] += 1

print('Confusion matrix\n', conf_matrix)

TP = conf_matrix.diag()
for c in range(nb_classes):
    idx = torch.ones(nb_classes, dtype=torch.bool)
    idx[c] = False
    # all non-class samples classified as non-class
    TN = conf_matrix[idx][:, idx].sum()
    # all non-class samples classified as class
    FP = conf_matrix[idx, c].sum()
    # all class samples not classified as class
    FN = conf_matrix[c, idx].sum()
nb_samples defines the number of samples we are dealing with.
In your code it would be the batch size, if you apply this code in your training loop, or the length of your dataset, if you collect all predictions and targets during training.
The code first creates a confusion matrix and uses it to compute the TP, TN, FP and FN stats.
Have a look at the Wikipedia info about Confusion Matrix for more information.
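To collect the stats over a whole dataset rather than a single batch, you can accumulate the confusion matrix across iterations of your evaluation loop. A sketch of that pattern, where the random tensors stand in for `model(data)` and the real targets from your DataLoader:

```python
import torch

nb_classes = 4
conf_matrix = torch.zeros(nb_classes, nb_classes)

torch.manual_seed(0)
for _ in range(5):  # e.g. 5 batches of 8 samples each
    output = torch.randn(8, nb_classes)           # replace with model(data)
    target = torch.randint(0, nb_classes, (8,))   # replace with batch targets
    pred = torch.argmax(output, 1)
    for t, p in zip(target, pred):
        conf_matrix[t, p] += 1  # accumulate counts across batches

print('total samples:', int(conf_matrix.sum()))
```

After the loop, the matrix sums to the total number of samples seen, and the per-class TP/TN/FP/FN computation works on it unchanged.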