I selected a few classes from the ImageNet dataset and fine-tuned the vgg16_bn (VGG16 with batch normalization) model, keeping one fully connected layer instead of the original three fully connected layers.
I check the test accuracy after every epoch because I want to save the best model for later use. I use test accuracy as the selection criterion, i.e., I assume the model with the better test accuracy is the one whose weights have been updated toward a better minimum of the loss function.
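The selection logic I use is essentially this (a plain-Python sketch; the accuracy numbers are made up, and in the real loop the save step would be something like `torch.save(model.state_dict(), path)`):

```python
def track_best(accuracies):
    """Return (best_epoch, best_acc) for a per-epoch accuracy list."""
    best_acc, best_epoch = float("-inf"), -1
    for epoch, acc in enumerate(accuracies):
        if acc > best_acc:
            best_acc, best_epoch = acc, epoch
            # here the real loop would checkpoint the model, e.g.
            # torch.save(model.state_dict(), "best_model.pth")
    return best_epoch, best_acc

# made-up per-epoch test accuracies showing the pattern I observe:
# rises, peaks, then declines
accs = [0.62, 0.71, 0.78, 0.83, 0.81, 0.79]
print(track_best(accs))  # (3, 0.83)
```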
During the first few epochs, the test accuracy is higher than the training accuracy. I understand this as the network not having learned properly yet, so the predictions are not reliable.
As training goes on, the training accuracy gradually increases and the training loss decreases; the test accuracy also increases. Then, after some epochs, I see that training keeps getting better, but the test loss starts increasing and the test accuracy starts decreasing. Here my understanding is that the network has now learned well, since the training loss is low, and is therefore trying to make more accurate predictions on the test dataset.
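The point where the test loss starts rising is the kind of thing patience-based early stopping is meant to detect; a minimal sketch of that logic (the loss values below are made up for illustration, not from my logs):

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch at which training would stop, or None.

    Stops after `patience` consecutive epochs without a new best
    (lowest) validation/test loss.
    """
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return None

# made-up test losses: decreasing at first, then increasing
losses = [1.2, 0.9, 0.7, 0.65, 0.70, 0.75, 0.80]
print(early_stop_epoch(losses))  # 5
```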
Is my understanding correct?
I am new to deep learning and have little experience, so it would be helpful if someone could guide me on this.
Also, one thing I do not understand: if training gets better, meaning the loss is reduced and the network parameters (weights) are updated accordingly, why does the network give lower accuracy on the test dataset?
I can share the code and output logs if required.
Any help will be appreciated.