Increase Learning Rate using lr_scheduler

I am trying to progressively increase the learning rate using an lr_scheduler, but the built-in schedulers don't seem to allow it because of how they validate their factors, e.g. ConstantLR only accepts factors between 0 and 1:

class ConstantLR(_LRScheduler):
    def __init__(self, optimizer, factor=1.0 / 3, total_iters=5, last_epoch=-1, verbose=False):
        if factor > 1.0 or factor < 0:
            raise ValueError('Constant multiplicative factor expected to be between 0 and 1.')
   ...

Is there any way to achieve my goal (without implementing my own _LRScheduler)?

I think a LambdaLR object should work:

import torch
import torch.nn as nn

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1.)

# LambdaLR sets lr = base_lr * lr_lambda(epoch); the lambda's return value is
# not restricted to [0, 1], so it can grow the learning rate each epoch.
factor = 2.
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda epoch: (epoch + 1) * factor, last_epoch=-1, verbose=False)

for epoch in range(10):
    print(optimizer.param_groups[0]['lr'])
    optimizer.step()
    scheduler.step()
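
This should print 2.0, 4.0, ..., 20.0, since LambdaLR multiplies the base lr (1.0 here) by whatever the lambda returns for the current epoch.

If you would rather scale the previous lr instead of the base lr, MultiplicativeLR also takes a lambda and is not limited to factors below 1. A minimal sketch along the same lines (the 1.5 growth factor is just an arbitrary value for illustration):

import torch
import torch.nn as nn

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1.)

# MultiplicativeLR scales the current lr by the lambda's return value on each
# scheduler.step() after the first, so a constant factor > 1 grows it geometrically.
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lambda epoch: 1.5)

for epoch in range(10):
    print(optimizer.param_groups[0]['lr'])  # 1.0, 1.5, 2.25, ...
    optimizer.step()
    scheduler.step()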