# Multi-Class ROC

Hey, I am building a multi-class classifier with 4 classes. I have already printed the sensitivity and specificity along with a confusion matrix, and now I want to plot the ROC curves for all 4 classes in one figure. Since ROC is a binary metric, each curve is ‘given class vs rest’, but I want all 4 classes in the same plot.
I get my probabilities with this code -

```python
output = model(images)  # call the model directly rather than model.forward()
p = torch.nn.functional.softmax(output, dim=1)
```

If you already have the metrics (TP, FP, TN, FN), you could probably just use scikit-learn’s methods as explained here.


Well, I have done what it suggests, but in this -

```python
from sklearn.metrics import roc_curve, auc

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
```
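To actually get all four one-vs-rest curves into a single figure, a minimal self-contained sketch could look like the following. The random `y_true`/`y_score` arrays here are toy stand-ins for the real test labels and softmax outputs, and `nb_classes = 4` is assumed as in this thread:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

nb_classes = 4
rng = np.random.default_rng(0)
# toy stand-ins: y_test is one-hot ground truth, y_score mimics softmax output
y_true = rng.integers(0, nb_classes, size=200)
y_test = label_binarize(y_true, classes=range(nb_classes))
y_score = rng.random((200, nb_classes))
y_score /= y_score.sum(axis=1, keepdims=True)

fpr, tpr, roc_auc = {}, {}, {}
plt.figure()
for i in range(nb_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
    plt.plot(fpr[i], tpr[i], label=f"class {i} (AUC = {roc_auc[i]:.2f})")
plt.plot([0, 1], [0, 1], "--", color="gray")  # chance diagonal
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend(loc="lower right")
plt.show()
```

Each iteration adds one ‘class i vs rest’ curve to the same axes, so all four curves end up in one plot with their AUCs in the legend.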

Do I need to replace fpr and tpr with my FP and TP from
TP = conf_matrix.diag() and FP = conf_matrix[idx, c].sum()?

Also, I have changed n_classes to nb_classes = 4.
How can I use this -

```python
for i in range(nb_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
```

I do not have y_test, but I have dataloader_test, and what is y_score?

fpr and tpr are the false positive rate and true positive rate, respectively, while your metrics (FP and TP) are raw counts. You will get the FPR and TPR from the roc_curve() function, so you don't need to substitute your confusion-matrix counts.

y_test refers to the ground-truth labels (y_true), and y_score contains the predictions generated by your model (y_pred) - here, the predicted probabilities.
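Since you only have dataloader_test, you can build y_test and y_score by running the model over the loader and collecting the softmax outputs. A minimal sketch, where the small Linear model and the one-batch dataloader_test are toy stand-ins for the real ones in this thread:

```python
import torch
import torch.nn.functional as F

# toy stand-ins (assumptions) for the thread's model and dataloader_test
model = torch.nn.Linear(10, 4)
dataloader_test = [(torch.randn(32, 10), torch.randint(0, 4, (32,)))]

model.eval()
all_scores, all_labels = [], []
with torch.no_grad():
    for images, labels in dataloader_test:
        logits = model(images)
        all_scores.append(F.softmax(logits, dim=1))  # per-class probabilities
        all_labels.append(labels)

y_score = torch.cat(all_scores).numpy()  # shape (N, 4): softmax scores
y_true = torch.cat(all_labels)           # shape (N,): integer class labels
# one-hot encode the labels so y_test[:, i] gives the binary 'class i vs rest' target
y_test = F.one_hot(y_true, num_classes=4).numpy()
```

With y_test and y_score in this shape, the per-class roc_curve loop above works as written.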

Hi Ptrblck,

Can we have ROC curve for the target with float numbers?

I tried this code when the target is 0.9 instead of 1, but I got an error saying the continuous format is not supported:

```python
# note: in the original call, pos_label was passed to list.append() by mistake;
# roc_auc_score does not take a pos_label argument
AUC.append(roc_auc_score(TargetWholev3.reshape(-1, 1), predictedwhole_v3.reshape(-1, 1)))

fpr1, tpr1, thresholds = metrics.roc_curve(
    TargetWholev3.reshape(-1, 1), predictedwhole_v3.reshape(-1, 1),
    pos_label=max(TargetWholev3),
)
```

`metrics.roc_curve` expects binary (integer) labels, so you would have to undo the label smoothing first.
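A minimal sketch of undoing the smoothing before calling roc_curve, using hypothetical smoothed targets (0.9/0.1 standing in for 1/0) rather than the real TargetWholev3:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# toy stand-ins (assumptions): label-smoothed targets and model scores
targets_smoothed = np.array([0.9, 0.1, 0.9, 0.1, 0.9, 0.1])
scores = np.array([0.8, 0.3, 0.6, 0.4, 0.9, 0.2])

# undo the smoothing: map values above 0.5 back to the hard label 1, the rest to 0
targets_hard = (targets_smoothed > 0.5).astype(int)

fpr, tpr, _ = roc_curve(targets_hard, scores)
print(auc(fpr, tpr))  # 1.0 for this perfectly separated toy data
```

The 0.5 threshold assumes symmetric smoothing around the two original labels; adjust it if your smoothing scheme differs.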