Hi! I’m trying to come up with an alternative metric to measure the robustness of my classifier, apart from the typical ones such as F1, accuracy, recall, etc.
The dataset is always the same and the model under evaluation is a resnet-18. I want to perform a grid search over different learning rates, optimizers, etc., and check which configuration gives me the most robust classification. The last layer of the resnet-18 is an nn.Linear with two outputs.

I was wondering whether the cumulative sum of each of the nn.Linear outputs is a good way to represent classification robustness. In other words, imagine that the nn.Linear outputs represent the dog and cat prediction values. If I sum all the dog values and all the cat values over the dataset, is it fair to say that the higher the dog sum, the stronger my classifier is at avoiding dog false positives?

I thought this could be interesting because, for example, in a binary tumor/not-tumor classification you might not mind being a bit lax about allowing tumor false positives, but you would definitely not want “not tumor” false positives. I know there are metrics that can more or less measure robustness, but do you think this method is useful?
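For reference, here is a minimal sketch of the metric I have in mind. The `nn.Linear(8, 2)` head and the random batch are just hypothetical stand-ins for the resnet-18 head and a real DataLoader; the point is only to show the per-output cumulative sum:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the resnet-18 head: a single nn.Linear
# with two outputs (e.g. "dog" and "cat" logits).
model = nn.Linear(8, 2)

# Fake evaluation batch standing in for a real DataLoader.
inputs = torch.randn(16, 8)

model.eval()
with torch.no_grad():
    logits = model(inputs)          # shape (16, 2): one row per sample
    class_sums = logits.sum(dim=0)  # cumulative sum of each output unit

dog_sum, cat_sum = class_sums.tolist()
print(f"dog sum: {dog_sum:.4f}, cat sum: {cat_sum:.4f}")
```

One thing I am unsure about is that these are raw (unbounded, possibly negative) logits, so the sums can be dominated by a few extreme samples rather than reflecting per-class reliability.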