Early Stopping in Deep Metric Learning

I am training a network using ArcFace loss. My eventual goal is to learn robust embeddings that can differentiate similar and dissimilar images (even for unseen classes).
The end task is verification, but training with ArcFace is a classification task, so I am confused about how to do early stopping: my validation set contains only unseen classes, because the validation task is verification rather than classification. Without early stopping, the ArcFace network overfits to the training classification data and performs very poorly on the validation verification task.
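For reference, the training objective is the usual additive angular margin head; a minimal sketch is below (the dimensions and the `s`/`m` hyperparameters are just typical placeholder values, not my exact setup). At training time it produces class logits for cross-entropy; at test time only the backbone embedding is kept for verification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin head (ArcFace): produces classification
    logits at train time; the backbone embedding is what is used for
    verification. s (scale) and m (margin) are the usual hyperparameters."""
    def __init__(self, emb_dim=512, num_classes=1000, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, emb_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # cosine similarity between L2-normalized embeddings and class centers
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the angular margin m only to the target-class angle
        one_hot = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(one_hot, torch.cos(theta + self.m), cos)
        return self.s * logits  # pass to nn.CrossEntropyLoss
```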

Thanks

This seems to give the answer. :wink:
Based on this statement it seems that you can (somehow) quantify that the model is overfitting and performing poorly on the validation set. Assuming you can measure this overfitting, as well as the performance on the training and validation sets, you could also use this metric for early stopping.
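E.g. something like this rough sketch could work, where `train_one_epoch`, `evaluate_verification`, `val_pairs`, and the patience value are all placeholders for your own training step and verification metric:

```python
import torch

# Early stopping driven by the validation verification metric instead of
# the training (classification) loss. All names below are placeholders.
best_acc, patience, bad_epochs = 0.0, 5, 0

for epoch in range(100):
    train_one_epoch(model, train_loader)               # normal ArcFace training
    val_acc = evaluate_verification(model, val_pairs)  # pair-matching accuracy, higher is better
    if val_acc > best_acc:
        best_acc, bad_epochs = val_acc, 0
        torch.save(model.state_dict(), "best.pt")      # checkpoint the best model so far
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"stopping at epoch {epoch}: no improvement in {patience} epochs")
            break
```

If you monitor a lower-is-better metric instead (e.g. an equal error rate), just flip the comparison.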


Thanks @ptrblck. You were right: instead of using a traditional sample-wise train-test split, I reserved a small set of classes from my training set for validation, sampled pairs from this held-out set, and computed the pair-wise matching accuracy on them.
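In case it helps someone else, the metric boils down to something like the sketch below (simplified; the cosine threshold is a placeholder you would tune on the validation pairs, not a value from my setup):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pairwise_matching_accuracy(model, pair_loader, threshold=0.5):
    """Fraction of validation pairs correctly classified as same/different.

    `pair_loader` is assumed to yield (img_a, img_b, same_label) batches
    sampled from the held-out validation classes; the cosine threshold
    is a placeholder that would normally be tuned.
    """
    model.eval()
    correct = total = 0
    for img_a, img_b, same in pair_loader:
        emb_a = F.normalize(model(img_a))   # L2-normalize embeddings
        emb_b = F.normalize(model(img_b))
        sim = (emb_a * emb_b).sum(dim=1)    # cosine similarity per pair
        pred = sim > threshold              # predict "same class"
        correct += (pred == same.bool()).sum().item()
        total += same.numel()
    return correct / total
```

Since the validation classes are disjoint from the training classes, this accuracy tracks how well the embeddings generalize to unseen identities, which is exactly what plateaus (and then degrades) once the model starts overfitting the classification task.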
