I cannot reproduce this issue inside the training loop:
import torch
from torch import optim
from torch.utils.data import DataLoader

optimizer = torch.optim.SGD([torch.randn(1, requires_grad=True)], lr=1.)
train_loader = DataLoader(torch.randn(100, 1))
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1.,
                                          steps_per_epoch=len(train_loader),
                                          epochs=1,
                                          anneal_strategy='linear')

for data in train_loader:
    optimizer.step()
    scheduler.step()

scheduler.step()  # raises the error, since total_steps has already been reached
Could epochs be smaller than hparams['epochs'], i.e. the value passed to the epochs argument when the scheduler is created?
In my code snippet the error after the training loop is expected, since I used epochs=1.
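To illustrate that mismatch, here is a minimal sketch (assuming a hypothetical hparams dict whose 'epochs' entry drives the training loop but is larger than the epochs value given to the scheduler):

import torch
from torch import optim
from torch.utils.data import DataLoader

hparams = {'epochs': 2}  # hypothetical: what the training loop actually runs

optimizer = torch.optim.SGD([torch.randn(1, requires_grad=True)], lr=1.)
train_loader = DataLoader(torch.randn(100, 1))

# scheduler is created for a single epoch only, so total_steps = 1 * 100
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1.,
                                          steps_per_epoch=len(train_loader),
                                          epochs=1,
                                          anneal_strategy='linear')

for epoch in range(hparams['epochs']):
    for data in train_loader:
        optimizer.step()
        scheduler.step()  # raises a ValueError in the second epoch,
                          # once total_steps is exceeded

Passing epochs=hparams['epochs'] (or an explicit total_steps covering the whole run) to OneCycleLR should avoid the error.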