Early stopping - should I stop training?

This is more of a theoretical question.

Consider the following example:
While training, my model reaches the following stats:
40% accuracy on the validation set and 60% accuracy on the training set.
If I continue training the model, the accuracy on the validation set slightly improves to 43%, while the accuracy on the training set goes up to around 97%.

Should I stop the training at the first point? In other words, should I take the model with the higher results even though it seems overfitted?

Thanks a lot.

Hi @Gal_Co, I guess you should stop training there. In my opinion both the first and the second results are meaningless, because the model is overfitting in both cases. That's the whole concept of early stopping, plus an overfitted model is of no use. :wink:
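
For reference, early stopping is usually implemented as "stop when the validation loss hasn't improved for N epochs". A minimal sketch in plain Python (the `patience`/`min_delta` parameter names and the `validate` call are just my placeholders, not any particular library's API):

```python
class EarlyStopping:
    """Stop training when validation loss stops improving."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience    # epochs to wait after the last improvement
        self.min_delta = min_delta  # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.counter = 0

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.counter = 0        # improvement: reset the patience counter
        else:
            self.counter += 1       # no improvement this epoch
        return self.counter >= self.patience


# Usage in a training loop (validate() is a hypothetical stand-in):
# stopper = EarlyStopping(patience=5)
# for epoch in range(max_epochs):
#     val_loss = validate(model)
#     if stopper.step(val_loss):
#         break
```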

IMHO, the answer really depends on your data and on your loss curves (training and validation), not solely on accuracy. Those curves will tell you much better whether or not you have overfitted your data.

Watching only accuracy is not great, since a good validation loss does not always translate into good accuracy, and vice versa. For example, in imbalanced problems, your model can achieve excellent accuracy by only learning the most frequent class of the dataset and rejecting the others. Your validation loss would be high, but the accuracy would still look “good”.
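
Here is a toy illustration of that effect (numbers made up): with a 95/5 class split, a model that always predicts the majority class with high confidence gets 95% accuracy, yet its loss tells a very different story:

```python
import math

# 95 samples of class 0, 5 samples of class 1
labels = [0] * 95 + [1] * 5

# A "lazy" model: always predicts class 0 with 99% confidence
prob_class0 = 0.99
preds = [0] * len(labels)

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Binary cross-entropy: class-1 samples are scored on 1 - prob_class0
log_loss = -sum(
    math.log(prob_class0) if y == 0 else math.log(1 - prob_class0)
    for y in labels
) / len(labels)

print(f"accuracy: {accuracy:.2f}")  # 0.95 -- looks great
print(f"log loss: {log_loss:.2f}")  # ~0.24 -- each minority sample costs -log(0.01), about 4.6
```

For comparison, always predicting the base rate p = 0.95 would give a log loss of about 0.20, so this “95% accurate” model is actually worse than a constant prior.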

At the end of the day, I would choose the model that has the lowest validation loss.
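
In practice that usually means checkpointing whenever the validation loss hits a new minimum and restoring that checkpoint at the end, rather than keeping the final epoch. A rough sketch assuming a PyTorch-style model (`train_one_epoch` and `evaluate` are hypothetical stand-ins for your own loop):

```python
import copy

best_val_loss = float("inf")
best_state = None

for epoch in range(num_epochs):
    train_one_epoch(model)      # hypothetical: your usual training step
    val_loss = evaluate(model)  # hypothetical: average loss on the validation set
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        # snapshot the weights at the best epoch so far
        best_state = copy.deepcopy(model.state_dict())

# Restore the best checkpoint instead of whatever the last epoch produced
model.load_state_dict(best_state)
```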