StepLR doesn't match the schedule I expect?

Hi, I'm training my network with PyTorch and I've run into a curious problem.
I want to use StepLR as my learning-rate schedule, and I expect lr = 0.0002 while epoch < 30 and lr = 0.00002 while 30 <= epoch < 40 (the run is 40 epochs in total). So I set it up as follows; my optimizer is Adam:

optimizer = optim.Adam(model.parameters(), lr=0.0002)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

The learning rate is printed in the following format:

print("Training: Epoch[{:0>3}/{0>3}] Iteration[{:0>3}/{0>3}] Loss: {:.4f} lr={:.8f}".format(epoch + 1, iterations, i + 1, len(trainLoader), loss_avg, sheduler.get_lr()[0]))

But I get lr=0.000002 once epoch passes 30. That is 0.0002 * 0.1 * 0.1, as if gamma were applied twice. I've checked many times and don't understand why this is happening.


Can somebody tell me what's wrong here? Please!

Looking at your question again, I think you should substitute get_last_lr() for get_lr(); get_lr() doesn't give you the latest learning rate. In recent PyTorch, calling it directly even triggers this warning:

To get the last learning rate computed by the scheduler, please use get_last_lr().
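
Here is a minimal sketch of the difference (the exact get_lr() behavior is version-dependent; in recent PyTorch releases, StepLR.get_lr() applies gamma once more whenever last_epoch lands on a multiple of step_size, which is exactly where your 0.000002 comes from):

import torch
from torch import optim

param = torch.zeros(1, requires_grad=True)  # dummy parameter so Adam has something to hold
optimizer = optim.Adam([param], lr=0.0002)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for _ in range(30):
    optimizer.step()
    scheduler.step()

print(scheduler.get_last_lr()[0])  # 2e-05: the lr the optimizer is actually using
print(scheduler.get_lr()[0])       # 2e-06: gamma applied a second time, plus the warning above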


Original reply:

Hi Danil, I don't think you're using StepLR quite correctly: the scheduler needs to be initialized and then stepped with scheduler.step(). Here is a simple usage example for StepLR.

1. You need scheduler.step()

import torch
from torch import optim, nn

class NN(nn.Module):
    def __init__(self):
        super(NN, self).__init__()
        self.layer = nn.Linear(10, 10)

    def forward(self, x):
        return self.layer(x)

model = NN()
optimizer = optim.Adam(model.parameters(), lr=0.0002)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(40):
    scheduler.step()  # advance the scheduler's epoch counter by one
    print("Training: Epoch[{:0>3}] lr={:.8f}".format(epoch + 1, scheduler.get_last_lr()[0]))

2. You may need scheduler.get_last_lr()

To get the last learning rate computed by the scheduler, please use get_last_lr().
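
By the way, get_last_lr() just reports the value the scheduler last set on the optimizer, so for a single parameter group you can read the same number straight off the optimizer:

current_lr = optimizer.param_groups[0]['lr']  # same value as scheduler.get_last_lr()[0]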

I hope this reply is helpful.

Oh, you mean:

  • First, I should change get_lr() to get_last_lr()?
  • Second, I should move scheduler.step() ahead of optimizer.step()?

Hmm, I've been calling scheduler.step() after finishing each epoch.
What I've written is as below:

for epoch in range(40):
    print(epoch, 'lr={:.6f}'.format(scheduler.get_lr()[0]))
    for i, image in enumerate(trainLoader):
        ...
        optimizer.step()
        if i % 200 == 0 and i != 0:
            print("Training: Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} lr={:.8f}".format(epoch + 1, iterations, i + 1, len(trainLoader), loss_avg, scheduler.get_lr()[0]))
            ...
    scheduler.step()

And I should modify it like this?

for epoch in range(40):
    scheduler.step()
    print(epoch, 'lr={:.6f}'.format(scheduler.get_last_lr()[0]))
    for i, image in enumerate(trainLoader):
        ...
        optimizer.step()
        if i % 200 == 0 and i != 0:
            print("Training: Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} lr={:.8f}".format(epoch + 1, iterations, i + 1, len(trainLoader), loss_avg, scheduler.get_last_lr()[0]))

The answer to the first question is yes: use get_last_lr().
The answer to the second is that both placements work; the learning rate drops by gamma at the epoch-30 boundary either way. One caveat: since PyTorch 1.1, scheduler.step() is expected to come after optimizer.step(), so stepping the scheduler at the very top of the loop, before any optimizer.step() has run, makes PyTorch warn and skip the first value of the schedule.
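
If you want to double-check without training anything, you can drive the scheduler on a dummy optimizer and watch the learning rate drop at epoch 30 (a minimal sketch):

import torch
from torch import optim

optimizer = optim.Adam([torch.zeros(1, requires_grad=True)], lr=0.0002)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(40):
    # ... one epoch of training would go here ...
    optimizer.step()
    scheduler.step()  # after optimizer.step(), as PyTorch >= 1.1 expects
    print(epoch + 1, scheduler.get_last_lr()[0])
# epochs 1-29 print 0.0002, epochs 30-40 print 2e-05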

Got it! Thanks, I’ll try it. :saluting_face: