LSTM training accuracy keeps jumping between epochs

I am trying to classify sequential images using a DCNN+LSTM. I first fine-tune a ResNet to extract features from the images and normalize these features. I then train a one-layer LSTM on the normalized features for sequential image classification. However, the training accuracy oscillates wildly between epochs (roughly between 0.65 and 1.0), even though the training loss decreases steadily and the validation metrics stay almost flat. I list several epochs below:

Epoch 0: Train Loss: 0.0106 Acc: 0.6094, Vald Loss: 0.0103 Acc: 0.7308
Epoch 1: Train Loss: 0.0024 Acc: 0.6562, Vald Loss: 0.0109 Acc: 0.7308
Epoch 2: Train Loss: 0.0036 Acc: 0.9062, Vald Loss: 0.0103 Acc: 0.7692
Epoch 3: Train Loss: 0.0021 Acc: 0.6406, Vald Loss: 0.0101 Acc: 0.7885
Epoch 4: Train Loss: 0.0010 Acc: 0.9844, Vald Loss: 0.0100 Acc: 0.7885
Epoch 5: Train Loss: 0.0013 Acc: 0.6875, Vald Loss: 0.0100 Acc: 0.7885
Epoch 6: Train Loss: 0.0010 Acc: 1.0000, Vald Loss: 0.0100 Acc: 0.7885
Epoch 7: Train Loss: 0.0008 Acc: 0.6719, Vald Loss: 0.0100 Acc: 0.7885
Epoch 8: Train Loss: 0.0005 Acc: 1.0000, Vald Loss: 0.0100 Acc: 0.7885
Epoch 9: Train Loss: 0.0013 Acc: 0.6719, Vald Loss: 0.0100 Acc: 0.7885
Epoch 10: Train Loss: 0.0009 Acc: 1.0000, Vald Loss: 0.0100 Acc: 0.7885
Epoch 11: Train Loss: 0.0005 Acc: 0.6875, Vald Loss: 0.0100 Acc: 0.7885
Epoch 12: Train Loss: 0.0005 Acc: 1.0000, Vald Loss: 0.0100 Acc: 0.7885
Epoch 13: Train Loss: 0.0010 Acc: 0.6875, Vald Loss: 0.0100 Acc: 0.7885
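For reference, the model has the following form. This is a minimal sketch rather than my exact code; the feature dimension, hidden size, and number of classes are placeholders:

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    """One-layer LSTM over pre-extracted (normalized) ResNet features."""

    def __init__(self, feat_dim=2048, hidden_dim=256, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim,
                            num_layers=1, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        # x: (batch, seq_len, feat_dim)
        out, _ = self.lstm(x)        # out: (batch, seq_len, hidden_dim)
        return self.fc(out[:, -1])   # classify from the last time step

model = SeqClassifier()
feats = torch.randn(4, 10, 2048)     # 4 sequences of 10 frame features
logits = model(feats)                # shape: (4, n_classes)
```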

Could anyone explain this phenomenon and suggest how to fix it?