# Change learning rate in pytorch

(Peter Ham) #1

I’m using adam for optimization. Should I change learning rate using this?

```python
for param_group in optimizer.param_groups:
    param_group['lr'] = lr
```


It seems every time I change the learning rate, the loss increases a lot, and the accuracy goes down at the learning rate transition point. What’s the reason?

#2

You could try using a learning-rate scheduler (`torch.optim.lr_scheduler`) for that -> http://pytorch.org/docs/master/optim.html
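A minimal sketch of what that looks like (the model, learning rate, and decay factor here are just placeholders): `ExponentialLR` multiplies the learning rate by a constant `gamma` on each `scheduler.step()`, so you don't have to touch `param_groups` by hand.

```python
import torch
from torch import nn, optim

# Placeholder model and optimizer for illustration only.
model = nn.Linear(10, 2)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# ExponentialLR multiplies the lr by gamma on every scheduler.step().
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

for epoch in range(5):
    # ... forward pass, loss.backward(), optimizer.step() would go here ...
    scheduler.step()  # decay the learning rate once per epoch

print(optimizer.param_groups[0]['lr'])  # roughly 1e-3 * 0.99**5
```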

(Austin) #3

That is the correct way to manually change a learning rate, and it’s fine to use it with Adam. As for why your loss increases when you change it: we can’t even guess without knowing how you’re changing the learning rate (increasing or decreasing), whether that’s the training or validation loss/accuracy, and details about the problem you’re solving. The reason could be anything from “you’re choosing the wrong learning rate” to “your optimization jumped out of a local minimum”.

It’s likely best to get more intuition as to what’s happening with the optimization on your own if you’re interested.

See the SGDR paper (SGDR: Stochastic Gradient Descent with Warm Restarts).

(Peter Ham) #4

Thanks. I’m actually decreasing the learning rate by multiplying it with 0.99 every epoch.

(Simon Wang) #5

\sum_i 0.99^i is a convergent sum, so the total step budget is bounded. You should consider a schedule where the learning rate converges to 0 but the sum of the learning rates diverges.
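A quick numerical sketch of the distinction (the specific schedules are illustrations, not recommendations): a geometric schedule like 0.99^i has a finite total, so the optimizer can only ever travel a bounded total distance, while a 1/(i+1) schedule also decays to 0 but its partial sums grow without bound.

```python
# Compare partial sums of two decaying schedules: geometric (convergent)
# versus harmonic-style (divergent).
def partial_sum(schedule, n):
    return sum(schedule(i) for i in range(n))

geometric = lambda i: 0.99 ** i      # total is 1 / (1 - 0.99) = 100 in the limit
harmonic = lambda i: 1.0 / (i + 1)   # partial sums grow like log(n), unbounded

for n in (10**3, 10**5, 10**6):
    print(n, partial_sum(geometric, n), partial_sum(harmonic, n))
```

The geometric sums level off near 100 no matter how large n gets, while the harmonic sums keep climbing.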

(Peter Ham) #6

I don’t understand this. Why should the sum diverge?

(Simon Wang) #7