Loading learning rate scheduler

I’m confused about the correct order of instantiating a learning rate scheduler and loading its state.

In the code below, I would like to resume training from a previous checkpoint. Passing the checkpoint’s last_epoch at construction ensures that the initial learning rates for the parameter groups are set correctly. However, the constructor itself calls step() once, incrementing last_epoch by 1, and calling load_state_dict afterwards overwrites last_epoch with the checkpointed value, causing a discrepancy when the schedule resumes.

Is this a bug or is there a proper way of doing the instantiation and restoration?

lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(
    optimizer, opt.epochs,
    last_epoch=checkpoint['lr_scheduler']['last_epoch'] if checkpoint else -1)

if checkpoint:
    lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
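For concreteness, here is a minimal, self-contained sketch of the behavior I’m describing (the model, optimizer, and T_max here are just placeholders, not my real setup):

```python
import torch
from torch import nn, optim

model = nn.Linear(2, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Train for a few epochs with a scheduler, then grab its state as a
# stand-in for a saved checkpoint.
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
for _ in range(5):
    optimizer.step()
    scheduler.step()
saved = scheduler.state_dict()   # saved['last_epoch'] == 5

# Resume: pass the checkpointed last_epoch at construction...
resumed = optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=10, last_epoch=saved['last_epoch'])
after_ctor = resumed.last_epoch  # 6 -- __init__ calls step() once

# ...then load_state_dict overwrites last_epoch back to the saved value.
resumed.load_state_dict(saved)
after_load = resumed.last_epoch  # 5 again

print(after_ctor, after_load)
```

So the two resume styles leave the scheduler one step apart, which is exactly the discrepancy I’m asking about.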