How to change the learning rate with LibTorch 1.5?

The old way
trainer.options.learning_rate(new_lr_rate)
does not work anymore.


The function name changed from learning_rate to lr ( https://github.com/pytorch/pytorch/releases )

Hi,
But the trainer (optimizer) no longer has an options member.

I think you have to use the optimizer constructor that takes an options object as an argument; that’s how I do it.
I don’t know if you can change the options after the optimizer is constructed. Maybe someone knows?
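Something like this, for example (a minimal sketch; the torch::nn::Linear model and the 1e-3 rate are just placeholders):

#include <torch/torch.h>

int main()
{
    torch::nn::Linear net(10, 2);

    // The learning rate is now passed through the options object at
    // construction time (AdamOptions::lr, formerly learning_rate).
    torch::optim::Adam optimizer(net->parameters(),
                                 torch::optim::AdamOptions(1e-3));

    return 0;
}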

Hi,

Did you take a look at the lr option of the param_group?
I have to implement an lr scheduler tomorrow; I will let you know.

Not so obvious, because lr is not part of the options base class. I looked at the source code, and here is what I’ve done:

template <
    typename Optimizer = torch::optim::Adam,
    typename OptimizerOptions = torch::optim::AdamOptions>
inline auto decay(
    Optimizer &optimizer,
    double rate)
    -> void
{
    for (auto &group : optimizer.param_groups())
    {
        for (auto &param : group.params())
        {
            if (!param.grad().defined())
                continue;

            auto &options = static_cast<OptimizerOptions &>(group.options());
            options.lr(options.lr() * (1.0 - rate));
        }
    }
}

I declared a specialization for each of my optimizers. It works, but I am not happy with this.
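For reference, the call sites look roughly like this (net and the two optimizers are hypothetical, and the explicit template arguments are what I mean by a spec per optimizer):

// Hypothetical optimizers, just to show the call sites.
torch::optim::Adam adam(net->parameters(), torch::optim::AdamOptions(1e-3));
torch::optim::SGD  sgd(net->parameters(), torch::optim::SGDOptions(1e-2));

decay(adam, 0.01);  // template defaults already cover Adam / AdamOptions
decay<torch::optim::SGD, torch::optim::SGDOptions>(sgd, 0.01);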

Pascal


Thank you. Yes, it appears that the trick is:
static_cast<OptimizerOptions &>(group.options()).lr(new_lr_rate);
Not obvious 🙂
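In context, setting a fixed new rate for every group would look something like this (Adam and the 1e-4 value are just assumptions for the example):

double new_lr_rate = 1e-4;  // placeholder value
for (auto &group : optimizer.param_groups())
    static_cast<torch::optim::AdamOptions &>(group.options()).lr(new_lr_rate);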


I was wrong; I changed it to:

    for (auto &group : optimizer.param_groups())
    {
        if(group.has_options())
        {
            auto &options = static_cast<OptimizerOptions &>(group.options());
            options.lr(options.lr() * (1.0 - rate));
        }
    }
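
For the scheduler part, I will probably just call it once per epoch, roughly like this (num_epochs, model and train_one_epoch are placeholders for your own training loop, and the 0.05 rate is arbitrary):

for (int epoch = 0; epoch < num_epochs; ++epoch)
{
    train_one_epoch(model, optimizer);  // your training step
    decay(optimizer, 0.05);             // lr *= (1.0 - 0.05) after each epoch
}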

Pascal