I’m trying to use the ReduceLROnPlateau scheduler, but it doesn’t seem to do anything: it never decreases the learning rate after my loss stops decreasing (in fact, the loss increases quite a bit over multiple epochs).
Here is the code:
import torch.nn as nn
import torch.optim as optim

criterion = nn.MSELoss()
optimizer = optim.Adam(self.model.parameters(), lr=lr, weight_decay=weight_decay)
lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, verbose=True)

for epoch in range(epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader):
        x_batch, s_batch = data
        x_batch, s_batch = x_batch.to(self.device), s_batch.to(self.device)

        optimizer.zero_grad()
        outputs = self.model(x_batch)
        loss = criterion(outputs, s_batch)
        running_loss += loss.item()

        loss.backward()
        optimizer.step()

    # step the scheduler once per epoch with the summed epoch loss
    lr_scheduler.step(running_loss)
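For comparison, here is a minimal standalone check I put together (the dummy parameter and the constant fake loss are just for illustration). With the scheduler's default settings (mode='min', factor=0.1, patience=10), I would expect the printed learning rate to drop once the plateau lasts longer than the patience:

import torch
import torch.optim as optim

# toy setup: a single dummy parameter, just to give the optimizer something to hold
param = torch.nn.Parameter(torch.zeros(1))
optimizer = optim.Adam([param], lr=0.01)
lr_scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer)  # defaults: mode='min', factor=0.1, patience=10

for epoch in range(15):
    lr_scheduler.step(1.0)  # constant "loss" -> a plateau from the first epoch
    print(epoch, optimizer.param_groups[0]['lr'])  # should fall from 0.01 to 0.001 once patience is exceeded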
What am I missing?