What are some common reasons that loss may increase towards the end of an epoch?

I’m not necessarily looking to solve this issue, as I’m mostly just learning, but while training a small model of mine I noticed that the loss starts pretty low, then increases, and then drops again at the start of the next epoch.

I’ve done some googling, but I was curious whether anyone else has encountered this and, if so, what the common culprits are.

Something like this:

--- Epoch 4 ---
Epoch 4 | Round 0: Loss = 0.1609647423028946 Validation Loss = 0.16069461405277252
Epoch 4 | Round 100: Loss = 0.7978596687316895 Validation Loss = 0.7975720167160034
Epoch 4 | Round 200: Loss = 0.6700431704521179 Validation Loss = 0.669928789138794
Epoch 4 | Round 300: Loss = 2.1349849700927734 Validation Loss = 2.134230613708496
Epoch 4 | Round 400: Loss = 2.5852344036102295 Validation Loss = 2.5841891765594482
Epoch 4 | Round 500: Loss = 2.864406108856201 Validation Loss = 2.86250376701355
--- Epoch 5 ---
Epoch 5 | Round 0: Loss = 0.09773023426532745 Validation Loss = 0.09760947525501251
Epoch 5 | Round 100: Loss = 0.4646699130535126 Validation Loss = 0.46453770995140076
Epoch 5 | Round 200: Loss = 0.4494265019893646 Validation Loss = 0.44930222630500793
Epoch 5 | Round 300: Loss = 1.3677105903625488 Validation Loss = 1.3674801588058472
Epoch 5 | Round 400: Loss = 2.0765790939331055 Validation Loss = 2.0757954120635986
Epoch 5 | Round 500: Loss = 2.258286952972412 Validation Loss = 2.2565605640411377
--- Epoch 6 ---
Epoch 6 | Round 0: Loss = 0.0712127760052681 Validation Loss = 0.0711481124162674
Epoch 6 | Round 100: Loss = 0.2980029582977295 Validation Loss = 0.2978473901748657
Epoch 6 | Round 200: Loss = 0.2850055694580078 Validation Loss = 0.28492817282676697
Epoch 6 | Round 300: Loss = 0.9372814893722534 Validation Loss = 0.9373258948326111
Epoch 6 | Round 400: Loss = 1.617525339126587 Validation Loss = 1.61699640750885
Epoch 6 | Round 500: Loss = 1.804351806640625 Validation Loss = 1.8027862310409546

I’m also now seeing, when training with a larger dataset, that the loss sometimes spikes and then immediately falls off again:

--- Epoch 7 ---
Epoch 7 | Round 0: Loss = 0.0009140036418102682 Validation Loss = 0.0009111818508245051
Epoch 7 | Round 100: Loss = 0.00033618774614296854 Validation Loss = 0.0003360148111823946
Epoch 7 | Round 200: Loss = 0.0008071979973465204 Validation Loss = 0.0008058046805672348
Epoch 7 | Round 300: Loss = 0.0006388478213921189 Validation Loss = 0.0006378054386004806
Epoch 7 | Round 400: Loss = 0.00026883321697823703 Validation Loss = 0.0002683177881408483
Epoch 7 | Round 500: Loss = 0.00016036085435189307 Validation Loss = 0.00016005177167244256
Epoch 7 | Round 600: Loss = 0.00012482452439144254 Validation Loss = 0.00012471845548134297
Epoch 7 | Round 700: Loss = 0.0006077916477806866 Validation Loss = 0.0006070979870855808
Epoch 7 | Round 800: Loss = 0.0002059808321064338 Validation Loss = 0.0002061814011540264
Epoch 7 | Round 900: Loss = 0.0007044092635624111 Validation Loss = 0.0007117609493434429
Epoch 7 | Round 1000: Loss = 6.421970465453342e-05 Validation Loss = 6.438294803956524e-05
Epoch 7 | Round 1100: Loss = 0.00015985441859811544 Validation Loss = 0.00016017867892514914
Epoch 7 | Round 1200: Loss = 0.0011026774300262332 Validation Loss = 0.0011021472746506333
Epoch 7 | Round 1300: Loss = 0.0001646105374675244 Validation Loss = 0.0001643123832764104
Epoch 7 | Round 1400: Loss = 0.00017581842257641256 Validation Loss = 0.00017495454812888056
Epoch 7 | Round 1500: Loss = 9.868131019175053e-05 Validation Loss = 9.85043661785312e-05
Epoch 7 | Round 1600: Loss = 5.0628019380383193e-05 Validation Loss = 5.0486214604461566e-05
Epoch 7 | Round 1700: Loss = 0.00028382311575114727 Validation Loss = 0.0002849879383575171
Epoch 7 | Round 1800: Loss = 0.0003619095077738166 Validation Loss = 0.0003625099780037999
Epoch 7 | Round 1900: Loss = 9.860018326435238e-05 Validation Loss = 9.906368359224871e-05

Are you shuffling your data properly, or are you feeding it in the same order every epoch?

I was feeding it in the same order. Thanks!
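For anyone else hitting this: a minimal sketch of per-epoch shuffling in plain Python (no framework assumed). The idea is just to draw a fresh sample order at the start of every epoch, so the model never sees the same sequence twice; in PyTorch the idiomatic equivalent is `DataLoader(..., shuffle=True)`, which reshuffles every epoch for you. The function name and seeding scheme here are illustrative, not from any particular library.

```python
import random

def epoch_order(n_samples, epoch, seed=0):
    """Return a fresh, reproducible shuffled index order for this epoch."""
    rng = random.Random(seed + epoch)  # different permutation each epoch
    order = list(range(n_samples))
    rng.shuffle(order)
    return order

# Each epoch still visits every sample exactly once, just in a new order:
for epoch in range(3):
    order = epoch_order(10, epoch)
    print(f"epoch {epoch}: {order}")
```

The per-epoch reseed keeps runs reproducible while still changing the order each pass, which is what breaks the "loss climbs over the epoch, resets at the boundary" pattern caused by a fixed (e.g. easy-to-hard) sample ordering.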