The old way
trainer.options.learning_rate(new_lr_rate)
does not work anymore.
Hi,
But trainer (optimizer) does not have options anymore.
I think you have to use the optimizer constructor that takes the options as an argument; that's how I do it.
I don't know whether you can change the options after the optimizer has been constructed, maybe someone else knows?
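For the constructor route, a minimal sketch (the Linear module and the hyperparameter values are just placeholders):

#include <torch/torch.h>

int main()
{
    torch::nn::Linear model(10, 1);  // placeholder module
    // The learning rate (and other hyperparameters) are set through the
    // options object passed to the optimizer constructor.
    torch::optim::Adam optimizer(
        model->parameters(),
        torch::optim::AdamOptions(/*lr=*/1e-3).weight_decay(1e-4));
}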
Hi,
Did you take a look at the lr option on param_group?
I have to implement an lr scheduler tomorrow, I will let you know.
Not so obvious, because lr is not part of the options base class. I looked at the source code and here is what I've done:
template <
    typename Optimizer = torch::optim::Adam,
    typename OptimizerOptions = torch::optim::AdamOptions>
inline auto decay(
    Optimizer &optimizer,
    double rate)
    -> void
{
    for (auto &group : optimizer.param_groups())
    {
        for (auto &param : group.params())
        {
            if (!param.grad().defined())
                continue;
            auto &options = static_cast<OptimizerOptions &>(group.options());
            options.lr(options.lr() * (1.0 - rate));
        }
    }
}
I declared a specialization for each of my optimizers. It works, but I am not happy with it.
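For reference, a call site could look roughly like this (the optimizer instances and the 1% rate are only illustrative):

// Adam picks up the default template arguments.
decay(adam_optimizer, /*rate=*/0.01);

// Other optimizers need their matching options type spelled out, otherwise
// the static_cast would target the wrong options class.
decay<torch::optim::SGD, torch::optim::SGDOptions>(sgd_optimizer, /*rate=*/0.01);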
Pascal
Thank you. Yes, it appears that the trick is:
static_cast<OptimizerOptions &>(group.options()).lr(new_lr_rate);
Not obvious
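Putting it together, setting an absolute learning rate on every parameter group would look roughly like this (assuming an Adam optimizer; new_lr is whatever value you need):

double new_lr = 1e-4;  // illustrative value
for (auto &group : optimizer.param_groups())
{
    static_cast<torch::optim::AdamOptions &>(group.options()).lr(new_lr);
}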
I was wrong, I changed to:
for (auto &group : optimizer.param_groups())
{
    if (group.has_options())
    {
        auto &options = static_cast<OptimizerOptions &>(group.options());
        options.lr(options.lr() * (1.0 - rate));
    }
}
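For example, applied once per epoch it would look roughly like this (train_one_epoch and num_epochs stand in for your own code):

for (int64_t epoch = 0; epoch < num_epochs; ++epoch)
{
    train_one_epoch(model, optimizer);  // placeholder for the usual training pass
    decay(optimizer, /*rate=*/0.01);    // multiplies every group's lr by 0.99
}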
Pascal