I am training a model with an unsupervised loss function. For validation, I compute the same loss function on the validation set after every epoch and plot it.
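The validation step looks roughly like this (a minimal PyTorch sketch; `model`, `val_loader`, and `unsupervised_loss` stand in for my actual code):

```python
import torch

@torch.no_grad()
def validate(model, val_loader, unsupervised_loss, device="cpu"):
    model.eval()
    total, n = 0.0, 0
    for batch in val_loader:
        batch = batch.to(device)
        output = model(batch)
        # same unsupervised loss as in training, e.g. a reconstruction term
        loss = unsupervised_loss(output, batch)
        total += loss.item() * batch.size(0)
        n += batch.size(0)
    return total / n  # mean loss over the whole validation set

# after each training epoch:
# val_loss = validate(model, val_loader, unsupervised_loss)
# print(f"epoch {epoch}: val loss = {val_loss:.4f}")
```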
Although the general trend of my validation curve goes down, I am wondering why the validation loss is so noisy and varies so much from epoch to epoch. Could this indicate some kind of overfitting?
By the way, my results look fine (except for the “bad” epochs), but I am looking for ways to improve them even further.
I am already using weight decay for regularization.
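For concreteness, the weight decay is set on the optimizer, roughly like this (PyTorch; the model and coefficient are only placeholders, not my exact setup):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # stand-in for my actual model
# weight_decay adds an L2 penalty on the parameters; 1e-4 is just an example value
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```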
Thanks in advance!