Validation accuracy decrease: interpretation


I am working on an image classification problem with a CNN. Specifically, I am fine-tuning a pre-trained VGG16 (ImageNet weights) on a relatively small dataset (1400 images). After about 500 iterations, the model seems to converge correctly.

The problem is that, although the training accuracy increases as expected, the validation accuracy decreases almost from the beginning of training.

The images below show several independent train/test configurations, all with similar behavior.

[Figure: accuracy increasing over the training set | accuracy decreasing over the validation set]
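To make the pattern concrete, here is a minimal sketch (with hypothetical numbers, not my real curves) of the divergence I am seeing: training accuracy keeps rising while validation accuracy trends downward, which is what I suspect is the overfitting signature.

```python
def diverging(train_acc, val_acc, window=3):
    """Flag the classic overfitting signature: over the last `window`
    epochs, training accuracy improved while validation accuracy fell."""
    if len(train_acc) < window + 1 or len(val_acc) < window + 1:
        return False  # not enough history to judge a trend
    train_trend = train_acc[-1] - train_acc[-1 - window]
    val_trend = val_acc[-1] - val_acc[-1 - window]
    return train_trend > 0 and val_trend < 0

# Hypothetical accuracy histories resembling my graphs:
train = [0.55, 0.68, 0.79, 0.88, 0.94, 0.98]
val   = [0.62, 0.60, 0.57, 0.55, 0.52, 0.50]
print(diverging(train, val))  # → True
```

This kind of check is essentially what early-stopping callbacks do: stop training once the validation metric stops improving while the training metric still climbs.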

Looking at the graphs, can this be interpreted as overfitting on the training data? Also, why does the model perform better on the validation set with the pre-trained weights alone than after training?

Thanks for your help!