I am training a CNN with OneCycleLR:
lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(
optimizer, max_lr=max_learning_rate, epochs=100, steps_per_epoch=1,
pct_start=0.05, anneal_strategy='cos', cycle_momentum=True,
base_momentum=0.85,
max_momentum=0.95
)
This works well, except that I can't train for more than 100 epochs. During my last run I reached 100 epochs and decided to train for 10 more, but I couldn't because of the following error:
ValueError: Tried to step 102 times. The specified number of total steps is 100
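For context, the error is easy to reproduce with a minimal script (the tiny model and hyperparameters below are placeholders, not my actual setup):

```python
import torch

# Placeholder model and optimizer, just to drive the scheduler.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=100, steps_per_epoch=1,
    pct_start=0.05, anneal_strategy='cos', cycle_momentum=True,
    base_momentum=0.85, max_momentum=0.95,
)

# The scheduled 100 steps (one per epoch) work fine.
for epoch in range(100):
    optimizer.step()
    scheduler.step()

# One more step exceeds total_steps=100 and raises the ValueError.
caught = None
try:
    scheduler.step()
except ValueError as e:
    caught = e
    print(e)
```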
Is there a way I can keep training after 100 epochs using the last learning rate from the OneCycleLR schedule?
I could create a new OneCycleLR with epochs=110, but that would change the shape of the learning-rate curve. Moreover, it would not be flexible enough if, for instance, I later decided to train for 115 epochs.
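To illustrate what I mean by "using the last learning rate": a sketch (placeholder model and optimizer again) that reads the final scheduled rate with scheduler.get_last_lr() and then simply stops stepping the scheduler, so the extra epochs run at a frozen learning rate. I am not sure this is the intended approach:

```python
import torch

# Placeholder model and optimizer.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=100, steps_per_epoch=1,
)

# Run the full scheduled cycle.
for epoch in range(100):
    optimizer.step()
    scheduler.step()

# Learning rate at the end of the cycle.
final_lr = scheduler.get_last_lr()[0]

# Extra epochs: no scheduler.step(), so the optimizer keeps final_lr.
for epoch in range(10):
    optimizer.step()
```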