Oscillating loss and accuracy during fine-tuning

Hey guys:

I noticed that in this tutorial:
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
the training loss and accuracy kept oscillating. Is this expected behavior, or should the training loss keep dropping and the training accuracy keep increasing after some time?
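For context, this is roughly how I'm tracking the numbers, as a minimal sketch. The toy model and random data here are placeholders standing in for the tutorial's ResNet and dataset, not its exact code. Since per-batch loss naturally jumps around with each minibatch, I'm averaging over the whole epoch:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy setup standing in for the tutorial's model/dataloader,
# just to illustrate the logging pattern.
torch.manual_seed(0)
x = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(5):
    running_loss, correct, total = 0.0, 0, 0
    for inputs, labels in loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # Per-batch loss fluctuates with the minibatch sample; accumulate
        # it so the epoch-level average shows the underlying trend.
        running_loss += loss.item() * inputs.size(0)
        correct += (outputs.argmax(1) == labels).sum().item()
        total += labels.size(0)

    print(f"epoch {epoch}: avg loss {running_loss / total:.4f}, "
          f"acc {correct / total:.4f}")
```

Even with this epoch-level averaging, the numbers still bounce around between epochs rather than decreasing monotonically.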

Please let me know what you guys think. :slight_smile:
Thx a lot.