Loss function doesn't decrease much

Hi everyone,
I am training a neural network and my loss function doesn't decrease much. Is the network overfitting? The network trains for 50 epochs, starting with a learning rate of 0.001 that is reduced to 0.0001 and 0.00001 at epochs 15 and 40, respectively. I have 31,000 images in the training set and around 12,000 in the validation set. Any suggestions?

[Figure_1: loss curve over training epochs]
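For reference, here is a minimal sketch of the step schedule described above, assuming PyTorch's `MultiStepLR`; the model is only a hypothetical placeholder, not the poster's actual network.

```python
import torch
import torch.nn as nn

# Hypothetical placeholder model, standing in for the actual network being trained.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# SGD with the initial learning rate of 0.001.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Multiply the lr by 0.1 at epochs 15 and 40: 0.001 -> 0.0001 -> 0.00001.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[15, 40], gamma=0.1
)

for epoch in range(50):
    # ... training and validation loops for this epoch go here ...
    scheduler.step()  # advance the schedule once per epoch
```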

Looking at the plot, it seems that the network "converged" after ~100 epochs. That's nothing unusual, especially with a smaller dataset like yours; from the plot, it looks pretty normal to me. You could try a different optimizer (e.g., one with an adaptive learning rate) to reduce the fluctuations at the end, though whether that would improve generalization performance is a different question.
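As a rough sketch of that suggestion (not anything confirmed in the thread), swapping the SGD + step schedule for an adaptive optimizer such as Adam could look like the following; the model and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical placeholder model, standing in for the actual network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Adam adapts a per-parameter step size, which often smooths out the
# end-of-training fluctuations seen with plain SGD.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Optionally decay the base lr over the 50 epochs as well.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    # ... training and validation loops for this epoch go here ...
    scheduler.step()
```

Whether the smoother curve translates into better validation accuracy depends on the dataset and architecture, so it is worth comparing both runs side by side.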
