optim.lr_scheduler.ReduceLROnPlateau minimum learning rate

I am running my CNN model with ReduceLROnPlateau (factor=0.1, patience=5) to manage the learning rate. However, after 100 epochs the learning rate is stuck at 1.0000000000000005e-08 and won't go any lower. Any idea why?
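Roughly, the setup looks like this (the model, optimizer, and the plateauing loss below are simplified placeholders, not my actual training code):

```python
import torch.nn as nn
import torch.optim as optim

# Placeholder CNN and optimizer standing in for the real training code.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 10))
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

for epoch in range(100):
    val_loss = 1.0  # a loss that never improves, to mimic the plateau
    scheduler.step(val_loss)
    print(epoch, optimizer.param_groups[0]["lr"])
```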

This behavior is expected, as stated in the documentation for the eps argument: "If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8." Once the learning rate reaches roughly 1e-8, the next proposed value would be about 1e-9, and the difference (about 9e-9) is smaller than the default eps of 1e-8, so the update is skipped.

You can pass a smaller eps value when constructing the scheduler, and the learning rate should continue to decrease.
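For example, a minimal sketch, assuming the optimizer already exists as in your setup (eps=1e-12 is just an illustrative value):

```python
import torch.optim as optim

# Same scheduler as before, but with a smaller eps so that updates
# below the default 1e-8 threshold are no longer discarded.
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5, eps=1e-12
)
```

Note that if you instead want the learning rate to stop decreasing at a specific floor, the min_lr argument is the one intended for that.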

Got it, I was using the default eps of 1e-8. I will set it to 1e-10 or even smaller. Thanks!