Consecutive linear interpolations in learning rate schedulers

I’m studying a learning algorithm related to how training samples are chosen in SGD, so the problem I’m going to present is somewhat tangential to that work.

The thing is that I want to use a learning rate schedule like this:

The user provides two lists as input, e.g.:

  • epoch_list_LR: [0, 100, 200, 500], so it’s implicit that I want the training to last 500 epochs.
  • LRs_list: [1, 0.1, 0.01, 0.001]

I would like the length of these lists not to be fixed a priori.
The learning rate applied to the optimizer should then be the linear interpolation between the consecutive points given by the lists, e.g.:

between epochs 100 and 200, the learning rate should decrease linearly between 0.01 and 0.001
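To make the mapping concrete, the per-epoch values I’m after are just a piecewise-linear interpolation of the two lists. As a sketch of the desired values only (using numpy.interp, not an actual scheduler):

import numpy as np

epoch_list_LR = [0, 100, 200, 500]
LRs_list = [1, 0.1, 0.01, 0.001]

# Piecewise-linear interpolation of the learning rate at a given epoch.
def lr_at(epoch):
    return float(np.interp(epoch, epoch_list_LR, LRs_list))

print(lr_at(150))  # halfway through the second segment -> 0.055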

The only “solution” I’ve come up with would be to create successive torch schedulers and use their milestone feature, but that would require knowing a priori how many learning rate “segments” the user wants to apply.
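For reference, the kind of chaining I mean would look roughly like the sketch below, using the LinearLR and SequentialLR schedulers from newer PyTorch versions (I haven’t verified that the values at the segment boundaries come out exactly as intended, and even built in a loop it feels indirect):

import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LinearLR, SequentialLR

epoch_list_LR = [0, 100, 200, 500]
LRs_list = [1, 0.1, 0.01, 0.001]

model = nn.Linear(1, 1)
# Base lr equals the first value, so the LinearLR factors are relative to it.
optimizer = torch.optim.SGD(model.parameters(), lr=LRs_list[0])

# One LinearLR per segment, scaling the base lr from lr_start to lr_end.
segments = [
    LinearLR(optimizer,
             start_factor=lr_start / LRs_list[0],
             end_factor=lr_end / LRs_list[0],
             total_iters=epoch_end - epoch_start)
    for epoch_start, epoch_end, lr_start, lr_end in zip(
        epoch_list_LR[:-1], epoch_list_LR[1:], LRs_list[:-1], LRs_list[1:])
]
# Switch schedulers at the interior milestones (epochs 100 and 200 here).
scheduler = SequentialLR(optimizer, schedulers=segments, milestones=epoch_list_LR[1:-1])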

Thank you for your time.

I think writing a custom scheduler would give you the most flexibility and would let you implement this kind of schedule directly.
Something like this might work:

import warnings

import torch
import torch.nn as nn


class MyScheduler(torch.optim.lr_scheduler._LRScheduler):
    def __init__(self, optimizer, lr_epochs, lrs, last_epoch=-1, verbose=False):
        # Precompute one learning rate per epoch by linearly interpolating
        # between each pair of consecutive milestones.
        self.learning_rates = []
        for epoch_start, epoch_end, lr_start, lr_end in zip(lr_epochs[:-1], lr_epochs[1:], lrs[:-1], lrs[1:]):
            self.learning_rates.extend(
                torch.linspace(start=lr_start, end=lr_end, steps=epoch_end - epoch_start))

        super().__init__(optimizer, last_epoch, verbose)

    def get_lr(self):
        if not self._get_lr_called_within_step:
            warnings.warn("To get the last learning rate computed by the scheduler, "
                          "please use `get_last_lr()`.")
        # Clamp to the last precomputed value once the final milestone is passed.
        lr = (self.learning_rates[self.last_epoch]
              if len(self.learning_rates) > self.last_epoch
              else self.learning_rates[-1])
        return [lr for _ in self.optimizer.param_groups]
    

model = nn.Linear(1, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1.) 
scheduler = MyScheduler(optimizer, lr_epochs=[0, 10, 20, 50], lrs=[1, 0.1, 0.01, 0.001])

for epoch in range(60):
    print('epoch {}, lr {}'.format(epoch, scheduler.get_last_lr()))
    optimizer.step()
    scheduler.step()

Output:

epoch 0, lr [tensor(1.)]
epoch 1, lr [tensor(0.9000)]
epoch 2, lr [tensor(0.8000)]
epoch 3, lr [tensor(0.7000)]
epoch 4, lr [tensor(0.6000)]
epoch 5, lr [tensor(0.5000)]
epoch 6, lr [tensor(0.4000)]
epoch 7, lr [tensor(0.3000)]
epoch 8, lr [tensor(0.2000)]
epoch 9, lr [tensor(0.1000)]
epoch 10, lr [tensor(0.1000)]
epoch 11, lr [tensor(0.0900)]
epoch 12, lr [tensor(0.0800)]
epoch 13, lr [tensor(0.0700)]
epoch 14, lr [tensor(0.0600)]
epoch 15, lr [tensor(0.0500)]
epoch 16, lr [tensor(0.0400)]
epoch 17, lr [tensor(0.0300)]
epoch 18, lr [tensor(0.0200)]
epoch 19, lr [tensor(0.0100)]
epoch 20, lr [tensor(0.0100)]
epoch 21, lr [tensor(0.0097)]
epoch 22, lr [tensor(0.0094)]
epoch 23, lr [tensor(0.0091)]
epoch 24, lr [tensor(0.0088)]
epoch 25, lr [tensor(0.0084)]
epoch 26, lr [tensor(0.0081)]
epoch 27, lr [tensor(0.0078)]
epoch 28, lr [tensor(0.0075)]
epoch 29, lr [tensor(0.0072)]
epoch 30, lr [tensor(0.0069)]
epoch 31, lr [tensor(0.0066)]
epoch 32, lr [tensor(0.0063)]
epoch 33, lr [tensor(0.0060)]
epoch 34, lr [tensor(0.0057)]
epoch 35, lr [tensor(0.0053)]
epoch 36, lr [tensor(0.0050)]
epoch 37, lr [tensor(0.0047)]
epoch 38, lr [tensor(0.0044)]
epoch 39, lr [tensor(0.0041)]
epoch 40, lr [tensor(0.0038)]
epoch 41, lr [tensor(0.0035)]
epoch 42, lr [tensor(0.0032)]
epoch 43, lr [tensor(0.0029)]
epoch 44, lr [tensor(0.0026)]
epoch 45, lr [tensor(0.0022)]
epoch 46, lr [tensor(0.0019)]
epoch 47, lr [tensor(0.0016)]
epoch 48, lr [tensor(0.0013)]
epoch 49, lr [tensor(0.0010)]
epoch 50, lr [tensor(0.0010)]
epoch 51, lr [tensor(0.0010)]
epoch 52, lr [tensor(0.0010)]
epoch 53, lr [tensor(0.0010)]
epoch 54, lr [tensor(0.0010)]
epoch 55, lr [tensor(0.0010)]
epoch 56, lr [tensor(0.0010)]
epoch 57, lr [tensor(0.0010)]
epoch 58, lr [tensor(0.0010)]
epoch 59, lr [tensor(0.0010)]

Note that I changed your approach a bit: based on your lists, the segment between epochs 100 and 200 would go from 0.1 to 0.01 (not 0.01 to 0.001), so in my example I’m decreasing the learning rate from 0.1 to 0.01 between epochs 10 and 20 (I’ve also used smaller milestones for easier visualization).

Once the last milestone is reached, I’m just returning the last valid learning rate, but you can of course change that behavior as well.
You can also add more checks etc., but this code might work as a minimal working example.
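For example, the kind of extra checks I mean could be a couple of input validations called from __init__; a hypothetical helper (validate_milestones is not part of the code above) might look like this:

def validate_milestones(lr_epochs, lrs):
    # Hypothetical sanity checks for the scheduler inputs.
    if len(lr_epochs) != len(lrs):
        raise ValueError("lr_epochs and lrs must have the same length")
    if any(e0 >= e1 for e0, e1 in zip(lr_epochs[:-1], lr_epochs[1:])):
        raise ValueError("lr_epochs must be strictly increasing")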


Great! I’m not very familiar with programming yet, so this is a great help. I think it’s useful to see how other people implement their own classes to get a grasp of this kind of problem.