A strange phenomenon I met during training

Hi, I met a strange phenomenon while training my network. At the beginning of training everything seemed normal: both the training loss and the evaluation loss were decreasing. However, after epoch 9 the evaluation loss jumped from 1.88 to 3.77 and the evaluation accuracy dropped from ~48% to ~10%, while the training loss kept decreasing, from 3.86 to 3.81. Have you ever met this? What could I try to rescue my network?

Edit: I have used weight decay and dropout in my network. I should also add that the evaluation loss returns to normal at epoch 11 (about 1.63); the abnormal evaluation loss only occurs at epoch 10. This is quite strange, because if my model were overfitting, the evaluation loss should keep increasing after epoch 9, or at least should not drop back down so sharply.
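For clarity, here is a minimal NumPy sketch of what I mean by the two regularizers, since I didn't post my framework code. This is an illustration under my own assumptions (inverted dropout, L2 weight decay folded into a plain SGD step), not my actual training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p and rescale
    survivors by 1/(1-p), so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

def sgd_step(w, grad, lr=0.1, weight_decay=1e-4):
    """One SGD update with L2 weight decay added to the gradient."""
    return w - lr * (grad + weight_decay * w)

# At evaluation time dropout is disabled, so only the training dynamics
# (not the eval forward pass) should be affected by it.
h = dropout(np.ones(8), p=0.5, training=False)
```

So during evaluation the network should behave deterministically; the spike at epoch 10 happens with dropout switched off.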