Unexpected behaviour of ReduceLROnPlateau learning rate scheduler

Hello everyone!

I am seeing some unexpected behaviour from the ReduceLROnPlateau learning rate scheduler; I have posted a code sample below. I want the learning rate to decrease after the loss hasn't decreased for 5 consecutive epochs, but the learning rate doesn't change when I use the scheduler.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(7 * 7 * 64, 1024)
        self.fc2 = nn.Linear(1024, 10)
        
    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

net = Net()

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=5, verbose=True, min_lr=0.000001)

# pass a dummy loss value to the scheduler once per "epoch"
scheduler.step(100)
scheduler.step(10)
scheduler.step(11)
scheduler.step(12)
scheduler.step(13)
scheduler.step(14)
scheduler.step(15)

Thanks for your help in advance! :smiley:

The next step will reduce the learning rate, if the loss still hasn't improved.
The patience argument is used to ignore these epochs without improvement, so the learning rate will only be lowered on the following step.
From the docs:

Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the 3rd epoch if the loss still hasn’t improved then.
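
To make this concrete with the snippet above, here is a minimal sketch (16 is just another made-up loss value, and the default factor=0.1 is assumed): one more step without improvement exceeds patience=5, so the reduction happens now.

# a 6th consecutive epoch without improvement exceeds patience=5,
# so this step should trigger the reduction (and print a message, since verbose=True)
scheduler.step(16)

# with the default factor=0.1, the learning rate should now be 0.001 * 0.1 = 0.0001
print(optimizer.param_groups[0]['lr'])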