I want to implement Noam decay (`NoamDecay`) in PyTorch.

I see that some of the official LR schedulers, such as `LinearLR`, contain two methods: `get_lr` and `_get_closed_form_lr`. What is the difference between them?
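My current understanding, illustrated with a plain-Python toy (my own sketch, not PyTorch code): a "chained" `get_lr` derives each new value from the current one, while a closed-form method computes the value directly from the step index. For a fixed per-step factor the two agree:

```python
# Toy comparison of the two styles (not PyTorch code).
base_lr = 1.0
factor_per_step = 0.9

# Chained style: each step multiplies the *current* lr by a per-step factor.
lr = base_lr
for step in range(1, 4):
    lr *= factor_per_step

# Closed-form style: the lr at step n is computed directly from n.
closed = base_lr * factor_per_step ** 3

# The two agree here (up to floating-point noise).
```

Is this the right mental model for why `LinearLR` provides both?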

This is my code. It does NOT work. Is the problem related to `_get_closed_form_lr` or `self.last_epoch`?

```
from torch.optim.lr_scheduler import _LRScheduler


class NoamLR(_LRScheduler):
    """Implements the Noam learning rate schedule: the learning rate increases
    linearly for the first ``warmup_steps`` training steps, and decreases
    thereafter proportionally to the inverse square root of the step number.

    Args:
        optimizer (Optimizer): Wrapped optimizer.
        warmup_steps (int): The number of steps to linearly increase the
            learning rate. Default: 0.
        last_epoch (int): The index of the last epoch. Default: -1.
        verbose (bool): If ``True``, prints a message to stdout for each
            update. Default: ``False``.
    """

    def __init__(self, optimizer, warmup_steps=0, last_epoch=-1, verbose=False):
        self.warmup_steps = warmup_steps
        super().__init__(optimizer, last_epoch, verbose)

    def get_lr(self):
        """Compute the learning rate using the scheduler.

        Returns:
            list: The learning rate of each parameter group.
        """
        if self.last_epoch == 0:
            return [group["lr"] for group in self.optimizer.param_groups]
        return [group["lr"] * (self.warmup_steps ** 0.5) *
                min(self.last_epoch ** -0.5,
                    self.last_epoch * (self.warmup_steps ** -1.5))
                for group in self.optimizer.param_groups]
```
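To check the formula itself, separately from the scheduler plumbing, I extracted the Noam factor into a pure function (the name `noam_scale` and the clamp to step 1 are my own choices):

```python
def noam_scale(step, warmup_steps):
    """Noam factor: linear warmup for ``warmup_steps`` steps, then
    inverse-square-root decay. ``step`` is clamped to 1 because
    ``0 ** -0.5`` raises ZeroDivisionError."""
    step = max(1, step)
    return (warmup_steps ** 0.5) * min(step ** -0.5,
                                       step * warmup_steps ** -1.5)

# The two branches meet at step == warmup_steps, where the factor peaks
# at ~1.0; halfway through warmup the factor is ~0.5.
```

So the factor is a function of the step alone, and I expected it to scale the base learning rate, which is why I am confused about where `self.last_epoch` and `_get_closed_form_lr` fit in.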