Hello everyone!
I am seeing some unexpected behaviour from the ReduceLROnPlateau learning rate scheduler. I have posted a code sample below. I want the learning rate to decrease after the loss hasn't decreased for 5 consecutive epochs, but the learning rate doesn't change when I use the scheduler.
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(7 * 7 * 64, 1024)
        self.fc2 = nn.Linear(1024, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
net = Net()
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=5, verbose=True, min_lr=0.000001)
# Simulated per-epoch loss values (the metric the scheduler monitors)
scheduler.step(100)
scheduler.step(10)
scheduler.step(11)
scheduler.step(12)
scheduler.step(13)
scheduler.step(14)
scheduler.step(15)
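For reference, here is a minimal sketch of how I'm checking whether the learning rate changes. It just repeats the same loss values as above and reads the lr stored in the optimizer's param groups after each step:

# Sketch: feed the same loss values and print the learning rate after each step
for loss in [100, 10, 11, 12, 13, 14, 15]:
    scheduler.step(loss)
    print(loss, optimizer.param_groups[0]['lr'])

The printed value stays at 0.001 the whole time, which is what I mean by the learning rate not changing.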
Thanks in advance for your help!