Understanding per layer learning rates with scheduler

Hey everybody,

I’m trying to use an LR scheduler for transfer learning. My backbone (an efficientnet_v2) should also be tuned, but with lower learning rates than the classifier. AFAIK this can be done like this, at least for a fixed learning rate:

import torch.optim as optim

LR = 1e-3

params = [
          {'params': model.conv1.parameters(), 'lr': LR / 10},
          {'params': model.bn1.parameters(), 'lr': LR / 10},
          {'params': model.layer1.parameters(), 'lr': LR / 8},
          {'params': model.layer2.parameters(), 'lr': LR / 6},
          {'params': model.layer3.parameters(), 'lr': LR / 4},
          {'params': model.layer4.parameters(), 'lr': LR / 2},
          {'params': model.fc.parameters()}  # classifier head uses the optimizer's default lr
         ]

optimizer = optim.Adam(params, lr=LR)

I now want to use a OneCycleLR. How do I get the scheduler to respect the defined LR fractions? How do the scheduler and the optimizer interact with each other? Does the scheduler update a single (to me yet unknown) variable, or does it update all the ‘lr’ fields of the parameter groups?

The learning rate schedulers will iterate over all .param_groups of the optimizer and update each group’s ‘lr’, as seen here.
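I.e. each call to scheduler.step() writes the newly computed values back into every param_group['lr'], so all ‘lr’ fields are updated (there is no single hidden variable). A small toy example (made-up model and numbers, just to illustrate):

from torch import nn, optim

model = nn.Linear(10, 2)
optimizer = optim.Adam([
    {'params': [model.weight], 'lr': 1e-4},  # explicit per-group lr
    {'params': [model.bias]},                # falls back to the default lr below
], lr=1e-3)

scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(3):
    optimizer.step()   # no gradients here; just to keep the expected step order
    scheduler.step()
    # the scheduler has rewritten the 'lr' entry of every param group
    print([g['lr'] for g in optimizer.param_groups])

Each print shows both groups halved in lockstep, so the relative ratio between the groups is kept.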

Just to go back to the original topic, I can confirm what @funnym0nk3y sees:

when using OneCycleLR it doesn’t respect the difference in layer learning rates (while e.g. CosineAnnealingLR does).

Example:

optimizer = torch.optim.AdamW([
        {'params': self.backbone.decoder.parameters()},
        {'params': self.backbone.encoder.parameters(), 'lr': self.initial_lr*0.1}
    ], 
    lr=self.initial_lr, 
    weight_decay=self.weight_decay)

When then using:

scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
                optimizer, T_max=self.trainer.max_epochs, eta_min=1e-5
            )

This works as expected, i.e. the LR for the encoder parameters stays a factor of 10 lower than the decoder LR (disregarding some eta_min funniness).

When using OneCycleLR, however, the decoder and encoder LRs are identical at all times, so the behavior differs both from CosineAnnealingLR and from what I would naively expect.

I’m still trying to parse through the OneCycleLR code, but it doesn’t seem to ever refer back to the current per-group LR when computing the new LRs.
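If I’m reading it right, OneCycleLR doesn’t look at the ‘lr’ you set on the param groups at all: it rebuilds each group’s schedule from its max_lr argument (scaled by div_factor / final_div_factor), so a scalar max_lr gives every group the identical curve. Passing max_lr as a list with one entry per param group does keep the gap. A minimal sketch with stand-in modules and made-up step counts (not my actual training code):

import torch
from torch import nn

decoder, encoder = nn.Linear(8, 8), nn.Linear(8, 8)   # stand-ins for the real backbone parts
initial_lr = 1e-3

optimizer = torch.optim.AdamW([
    {'params': decoder.parameters()},
    {'params': encoder.parameters(), 'lr': initial_lr * 0.1},  # effectively ignored by OneCycleLR
], lr=initial_lr, weight_decay=1e-2)

scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=[initial_lr, initial_lr * 0.1],  # one max_lr per param group preserves the 10x gap
    total_steps=100,
)

for step in range(5):
    optimizer.step()
    scheduler.step()
    print([f"{g['lr']:.2e}" for g in optimizer.param_groups])

Printing the per-group LRs over a few steps then shows the 10x ratio being kept through both the warm-up and the annealing phase.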