ReduceLROnPlateau keeps reducing lr even when not needed

Hello,

I am currently trying to incorporate a scheduler into my training loop, using the loss itself as the validation measurement. I noticed that the scheduler always reduces the learning rate, independently of the value of the loss. Setting the patience to 20, for example, I get output like this:

0m 6s (5 0%) 3838.608
0m 8s (10 0%) 68974.180
0m 11s (15 1%) 50144.387
0m 14s (20 1%) 46037.418
Epoch    21: reducing learning rate of group 0 to 5.0000e-03.
0m 16s (25 1%) 43242.031
0m 19s (30 2%) 48685.941
0m 21s (35 2%) 51668.488
0m 24s (40 2%) 48751.176
Epoch    42: reducing learning rate of group 0 to 2.5000e-03.
0m 26s (45 3%) 48641.238
0m 28s (50 3%) 52856.855
0m 31s (55 3%) 55549.047
0m 33s (60 4%) 49968.438
Epoch    63: reducing learning rate of group 0 to 1.2500e-03.
0m 35s (65 4%) 40710.168
0m 38s (70 4%) 40792.227
0m 40s (75 5%) 38300.871
0m 43s (80 5%) 44388.047
Epoch    84: reducing learning rate of group 0 to 6.2500e-04.
0m 45s (85 5%) 76917.484
0m 48s (90 6%) 62983.559
0m 50s (95 6%) 45617.375
0m 53s (100 6%) 39899.105
0m 55s (105 7%) 94773.094
Epoch   105: reducing learning rate of group 0 to 3.1250e-04.

At the beginning it makes sense to reduce the LR, but the reductions keep happening even once the loss starts decreasing, until the LR reaches the minimum allowed value …
I initialized the scheduler like this:

scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=20, verbose=True)

and taking a step with:

scheduler.step(loss)
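
For context, the scheduler step happens once per training iteration, roughly like this (a simplified sketch; the model, data, and loop length are just placeholders, not my actual code):

import torch
from torch import nn, optim
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Placeholder model, loss, and optimizer, only to show where the scheduler fits.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-2)
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=20, verbose=True)

for iteration in range(1, 1501):
    inputs = torch.randn(32, 10)   # dummy batch
    targets = torch.randn(32, 1)

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # The scheduler is stepped on the same loss that is being minimized.
    scheduler.step(loss)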

Did I forget something?


You didn’t set a minimum value…
https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau

min_lr (float or list) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. Default: 0.
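
For example, something along these lines (the 1e-4 floor is just an illustrative value) keeps the learning rate from being reduced below the given bound:

scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5,
                              patience=20, min_lr=1e-4, verbose=True)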