Hi everyone,
I read the paper about the LR range test and the one about the One Cycle policy. I copied this implementation of the LR range test, and I've also read up on the OneCyclePolicy:
https://towardsdatascience.com/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6
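For context, here is a minimal sketch of what an LR range test does (this is my own simplified version, not the linked implementation): sweep the learning rate exponentially from a tiny value to a large one over a fixed number of batches, record the loss at each step, then pick the LR just before the loss diverges. The model, data, and LR bounds below are placeholders.

```python
import torch

# Tiny stand-in model and loss; replace with your own
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-7)
criterion = torch.nn.MSELoss()

min_lr, max_lr, num_steps = 1e-7, 1.0, 50
# Multiplicative factor so the LR goes from min_lr to max_lr in num_steps
gamma = (max_lr / min_lr) ** (1 / (num_steps - 1))

lrs, losses = [], []
lr = min_lr
for step in range(num_steps):
    # Synthetic batch stands in for a real train loader
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    for group in optimizer.param_groups:
        group["lr"] = lr  # set the swept LR directly on the optimizer
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    lrs.append(lr)
    losses.append(loss.item())
    lr *= gamma

# One then plots losses vs. lrs (log x-axis) and picks the LR just
# before the loss starts to blow up as max_lr for the one-cycle schedule.
```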
It seems to be something many people wonder about:
https://mc.ai/super-convergence-with-just-pytorch/
Suppose I have implemented an LR range test correctly:

- What is the correct orchestration between the optimizer and the one-cycle scheduler? The optimizer takes the parameter `lr`, while the scheduler takes `max_lr`. How do the two relate?
- Can somebody provide an example? I've been looking for one with no success.
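Not an authoritative answer, but as far as I understand PyTorch's built-in `torch.optim.lr_scheduler.OneCycleLR`, the `lr` you pass to the optimizer is effectively a placeholder: the scheduler overwrites it on every `scheduler.step()`, driving the schedule from `max_lr` (the value you found with the range test) instead. A minimal sketch, assuming a toy model and hard-coded step counts:

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

# Tiny model just to have parameters to optimize
model = torch.nn.Linear(10, 2)

# The optimizer's `lr` is overwritten by OneCycleLR: the schedule starts
# at max_lr / div_factor (default div_factor=25) and peaks at max_lr.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

epochs = 3
steps_per_epoch = 5  # normally len(train_loader)
scheduler = OneCycleLR(
    optimizer,
    max_lr=0.1,                  # peak LR found with the LR range test
    epochs=epochs,
    steps_per_epoch=steps_per_epoch,
)

lrs = []
for epoch in range(epochs):
    for step in range(steps_per_epoch):
        # forward / loss.backward() would go here
        optimizer.step()
        scheduler.step()         # called once per batch, not per epoch
        lrs.append(optimizer.param_groups[0]["lr"])

# lrs ramps up toward max_lr, then anneals far below the starting LR
```

So the orchestration is: construct the optimizer (its `lr` will be managed for you), construct `OneCycleLR` with `max_lr` plus the total step budget, and call `scheduler.step()` after `optimizer.step()` on every batch.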
Thanks