Hi! I'm new to deep learning and have a question about training networks. I hope someone can help clear up my doubts.
When I train my network, I usually stop training once the training loss plot has converged. However, this always produces a validation loss plot that decreases and then starts increasing after a certain point. That means the network has overfitted, right? Should I instead stop training at the point where the validation loss starts to increase, even though the training loss has not yet converged?
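To make the question concrete, here is a minimal sketch (plain Python, no framework; the class and parameter names are my own) of the patience-based early stopping I'm asking about, where training stops after the validation loss fails to improve for a few epochs in a row:

```python
class EarlyStopping:
    """Stop training once validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # how many bad epochs to tolerate
        self.min_delta = min_delta    # minimum change that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss      # improvement: remember it and reset counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1      # no improvement this epoch
        return self.bad_epochs >= self.patience


# Toy validation-loss curve: decreases, then starts rising (overfitting).
val_losses = [1.0, 0.8, 0.7, 0.65, 0.66, 0.7, 0.75, 0.8]
stopper = EarlyStopping(patience=2)
stopped_at = None
for epoch, loss in enumerate(val_losses):
    if stopper.step(loss):
        stopped_at = epoch
        break
print(stopped_at, stopper.best)  # stops shortly after the minimum at 0.65
```

Is this roughly the right idea, i.e. keep the weights from the epoch with the best validation loss rather than training until the training loss flattens out?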
Thanks in advance!