How to implement adaptive step decay learning rate policy?

In the VGG and ResNet papers, the learning rate is decayed when validation accuracy stops improving, i.e. when the error reaches a plateau.
I want to implement this policy.
However, I don't know how to detect such an error plateau.
Could anyone give me some advice?

See this: https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L163

It's not part of a release yet.
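The core idea behind the linked `ReduceLROnPlateau` scheduler can be sketched in plain Python: track the best validation metric seen so far, count how many consecutive checks fail to improve on it by at least some threshold, and multiply the learning rate by a decay factor once that count exceeds a patience limit. The class and parameter names below are illustrative, not the actual PyTorch API:

```python
class PlateauDetector:
    """Illustrative sketch of plateau-based LR decay (not the PyTorch class).

    If the tracked metric has not improved by at least `threshold` for more
    than `patience` consecutive calls to `step`, the learning rate is
    multiplied by `factor`.
    """

    def __init__(self, lr, factor=0.1, patience=10, threshold=1e-4, mode='min'):
        self.lr = lr                # current learning rate
        self.factor = factor        # multiplicative decay applied at a plateau
        self.patience = patience    # how many bad checks to tolerate
        self.threshold = threshold  # minimum change that counts as improvement
        self.mode = mode            # 'min' for loss, 'max' for accuracy
        self.best = None            # best metric value seen so far
        self.num_bad = 0            # consecutive checks without improvement

    def _improved(self, metric):
        if self.mode == 'min':
            return metric < self.best - self.threshold
        return metric > self.best + self.threshold

    def step(self, metric):
        """Call once per validation pass; returns the (possibly decayed) lr."""
        if self.best is None or self._improved(metric):
            self.best = metric
            self.num_bad = 0
        else:
            self.num_bad += 1
        if self.num_bad > self.patience:
            self.lr *= self.factor
            self.num_bad = 0  # reset the counter after decaying
        return self.lr
```

For example, with `patience=2`, feeding a validation loss that stalls at the same value for several epochs triggers a single decay of the learning rate from 0.1 to 0.01. The real scheduler wraps an optimizer instead of holding the lr itself, but the plateau test is the same.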


Thank you very much!
I am also looking forward to an early stopping feature.
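Early stopping can be sketched with the same plateau-tracking idea: instead of decaying the learning rate, signal that training should halt once the metric has failed to improve for too many checks. This is a hypothetical helper, not an existing PyTorch class:

```python
class EarlyStopping:
    """Hypothetical helper (not part of torch.optim).

    `step` returns True once the tracked metric (assumed to be a loss,
    lower is better) has failed to improve by at least `threshold` for
    more than `patience` consecutive calls.
    """

    def __init__(self, patience=5, threshold=1e-4):
        self.patience = patience
        self.threshold = threshold
        self.best = None      # best (lowest) metric seen so far
        self.num_bad = 0      # consecutive checks without improvement

    def step(self, metric):
        if self.best is None or metric < self.best - self.threshold:
            self.best = metric
            self.num_bad = 0
        else:
            self.num_bad += 1
        return self.num_bad > self.patience
```

In a training loop you would check `if stopper.step(val_loss): break` after each validation pass.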