Change learning rate in PyTorch

I’m using Adam for optimization. Should I change the learning rate using this?

for param_group in optimizer.param_groups:
    param_group['lr'] = lr

It seems that every time I change the learning rate, the loss increases a lot and the accuracy drops right at the transition point. What’s the reason?

You could use an lr_scheduler for that -> http://pytorch.org/docs/master/optim.html
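For example, a minimal sketch of using a scheduler with Adam (the model, step size, and decay factor here are just illustrative placeholders):

    import torch

    model = torch.nn.Linear(10, 1)  # placeholder model, only for illustration
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Multiply the learning rate by 0.5 every 30 epochs (illustrative values).
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)

    for epoch in range(100):
        # ... run training for one epoch here ...
        scheduler.step()  # updates every param_group's lr according to the schedule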

That is the correct way to manually change a learning rate, and it’s fine to use it with Adam. As for why your loss increases when you change it: we can’t even guess without knowing how you’re changing the learning rate (increasing or decreasing), whether that’s the training or validation loss/accuracy, and details about the problem you’re solving. The reason could be anything from “you’re choosing the wrong learning rate” to “your optimization jumped out of a local minimum”.

If you’re interested, it’s probably best to build more intuition yourself about what’s happening during the optimization.

Adam paper

SGDR paper

Thanks. I’m actually decreasing the learning rate by multiplying it by 0.99 every epoch.
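Concretely, something like this at the end of every epoch (the model and initial learning rate here are just placeholders):

    import torch

    model = torch.nn.Linear(10, 1)  # placeholder model
    lr = 1e-3                       # placeholder initial learning rate
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(100):
        # ... train for one epoch ...
        lr *= 0.99  # decay the learning rate by 1% per epoch
        for param_group in optimizer.param_groups:
            param_group['lr'] = lr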

\sum_i 0.99^i is a convergent sum; you should consider a schedule that converges to 0 but whose sum diverges.
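To spell this out, writing \mathrm{lr}_0 for the initial learning rate:

    \sum_{i=0}^{\infty} \mathrm{lr}_0 \cdot 0.99^{i}
      = \frac{\mathrm{lr}_0}{1 - 0.99}
      = 100\,\mathrm{lr}_0 \quad\text{(a finite total)},

    \sum_{i=1}^{\infty} \frac{\mathrm{lr}_0}{i} = \infty
      \quad\text{(the per-step rate still goes to } 0 \text{, but the total diverges)}.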

I don’t understand this; why should the sum diverge?

Never mind, I was somehow thinking about convex problems :smiley: My bad.