The relation between accuracy and loss

I know that a lower validation loss does not guarantee a higher validation accuracy. However, I do not know whether the behaviour shown below is normal.

[image: training/validation loss and accuracy curves]

As we can see, after about epoch 25 the model starts overfitting (the validation loss increases sharply). I thought that when overfitting occurs, the validation accuracy should decrease. However, after about epoch 25 the validation accuracy still increases. This confuses me.

I also found that, when I evaluate on the testing set, the highest testing accuracy is not necessarily reached at the epoch with the lowest validation loss or the highest validation accuracy. This also confuses me. In total, I trained for 300 epochs, and the testing accuracy at epoch 300 is usually good whether or not that epoch has the highest validation accuracy.

It seems that after epoch 25 the model is narrowing in on perfecting its predictions on the training data. From that point on, overfitting means the model discards the higher-level semantics learned during the first 25 epochs and trades them for being able to predict your specific training examples in detail. Your validation set does not contain examples from your training data, right? Generalisation seems to be lost because the training data corpus is too small.
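To make that concrete, here is a small self-contained toy run (synthetic data, nothing to do with your model) where a deliberately tiny training set typically produces the same pattern: the training loss keeps shrinking while the validation loss eventually turns around, because the network starts memorising the few training examples instead of the underlying signal. The exact epoch where it turns depends on the seed and the model size.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_split(n):
    # Synthetic binary task in 20 dimensions; labels depend (noisily) on the first 5 features.
    x = torch.randn(n, 20)
    y = (x[:, :5].sum(dim=1) + 0.5 * torch.randn(n) > 0).long()
    return x, y

x_train, y_train = make_split(100)    # deliberately tiny training corpus
x_val, y_val = make_split(2000)       # independent validation split

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(1, 301):
    model.train()
    opt.zero_grad()
    train_loss = loss_fn(model(x_train), y_train)   # full-batch step on the tiny training set
    train_loss.backward()
    opt.step()

    if epoch % 25 == 0:
        model.eval()
        with torch.no_grad():
            val_logits = model(x_val)
            val_loss = loss_fn(val_logits, y_val)
            val_acc = (val_logits.argmax(dim=1) == y_val).float().mean()
        print(f"epoch {epoch:3d}  train loss {train_loss.item():.3f}  "
              f"val loss {val_loss.item():.3f}  val acc {val_acc.item():.3f}")
```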

I have no clue what happened after ~125 epochs - did you change the training data?

Hi,
Thanks for your reply.

The validation set does not contain examples from my training data; it is independent.

And I did not change the training data after about 125 epochs.

Looking forward to your reply!

Hi,

My training set, validation set, and testing set are all independent.

You might just be observing an instance of this mysterious effect in DL: GitHub - thegregyang/LossUpAccUp: Loss and accuracy go opposite ways...right? 🙂
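Here is a tiny made-up illustration (hypothetical logits, no real model) of how that can happen: accuracy only checks whether the argmax is correct, while cross-entropy also penalises confidence, so a few very confident mistakes can push the mean loss up even while more borderline samples flip over to the correct class.

```python
import torch
import torch.nn.functional as F

targets = torch.zeros(5, dtype=torch.long)   # the true class is 0 for all 5 samples

# "Earlier epoch": 3/5 correct with moderate confidence, 2/5 mildly wrong.
logits_a = torch.tensor([[ 1.0, 0.0],
                         [ 1.0, 0.0],
                         [ 1.0, 0.0],
                         [-0.5, 0.0],
                         [-0.5, 0.0]])

# "Later epoch": 4/5 correct (but only barely), 1/5 *very* confidently wrong.
logits_b = torch.tensor([[ 0.3, 0.0],
                         [ 0.3, 0.0],
                         [ 0.3, 0.0],
                         [ 0.3, 0.0],
                         [-6.0, 0.0]])

for name, logits in [("earlier epoch", logits_a), ("later epoch", logits_b)]:
    loss = F.cross_entropy(logits, targets)                    # mean cross-entropy
    acc = (logits.argmax(dim=1) == targets).float().mean()     # classification accuracy
    print(f"{name}: loss = {loss.item():.3f}, acc = {acc.item():.2f}")

# earlier epoch: loss ≈ 0.58, acc = 0.60
# later epoch:   loss ≈ 1.64, acc = 0.80  -> loss and accuracy both went up
```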

Hi,

Thanks!
I will check this link.