One cycle policy

Hi,

I am using the OneCycleLR scheduler with pct_start=1.0 (i.e. the learning rate goes from a small value up to the maximum over the whole schedule, with no annealing phase), but I get an error at the last training iteration:
```
computed_lr = self.anneal_func(group['max_lr'], group['min_lr'], down_step_num / self.step_size_down)
ZeroDivisionError: float division by zero
```

On the other hand, with pct_start=0.0 (i.e. the learning rate anneals from a large value down to the minimum), the scheduler works as expected. Is this a missed edge case, or am I misusing the scheduler?
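
For reference, here is a minimal sketch of how I trigger this. The exact numbers are arbitrary, and it assumes a PyTorch version whose get_lr divides by self.step_size_down, as in the traceback above:

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# pct_start=1.0 makes the entire schedule the warm-up phase,
# so the internal step_size_down ends up as 0.
scheduler = OneCycleLR(optimizer, max_lr=0.1, total_steps=10, pct_start=1.0)

for _ in range(10):
    optimizer.step()
    scheduler.step()  # ZeroDivisionError on the final call
```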

Thank you!

How did you define final_div_factor, which determines the minimal learning rate?
Could this value be too high, so that you run into a zero division?
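
For context, this is roughly how OneCycleLR derives the minimal learning rate from final_div_factor (a sketch of the documented computation, with the library's default values):

```python
max_lr = 0.1
div_factor = 25.0        # default
final_div_factor = 1e4   # default

initial_lr = max_lr / div_factor        # LR at the start of the cycle
min_lr = initial_lr / final_div_factor  # LR at the end of annealing
print(initial_lr, min_lr)               # 0.004 4e-07
```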

@Neofytos What is max_momentum set to? I can reproduce the same ZeroDivisionError with pct_start=1.0 and max_momentum=1.0. When I set max_momentum=0.99 (or anything less than 1.0), the ZeroDivisionError disappears.
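
A minimal comparison sketch of the two configurations, assuming SGD with momentum and the default cycle_momentum=True; the outcomes below are what I observe on my setup and may depend on the PyTorch version:

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

def run(max_momentum):
    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    scheduler = OneCycleLR(
        optimizer, max_lr=0.1, total_steps=10, pct_start=1.0,
        base_momentum=0.85, max_momentum=max_momentum,
    )
    for _ in range(10):
        optimizer.step()
        scheduler.step()

run(1.0)   # raises ZeroDivisionError on the last step for me
run(0.99)  # completes without error for me
```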