Epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible

I am using a LambdaLR scheduler as follows:

lmbda = lambda epoch: (1 - epoch/num_epochs)**0.9
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lmbda)

# training and validation loop
for epoch in range(1, num_epochs + 1):
    train_one_epoch(model, train_loader, train_criterion, optimizer, epoch=epoch)
    val_metric = validate(model, val_loader, val_criterion, epoch=epoch)
    scheduler.step(epoch)  # this call with the epoch argument triggers the warning

This is also the usage recommended in the documentation here.

However, in PyTorch 1.9.1, the following UserWarning is thrown:

/opt/conda/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:154: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
  warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
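For reference, this is the epoch-free call pattern the warning points to, as a minimal self-contained sketch. The SGD optimizer, the dummy parameter, and num_epochs = 10 are stand-ins for the real training setup, not part of the original code:

```python
import torch
from torch import optim

num_epochs = 10  # stand-in value
param = torch.zeros(1, requires_grad=True)  # dummy parameter in place of model weights
optimizer = optim.SGD([param], lr=0.1)

lmbda = lambda epoch: (1 - epoch / num_epochs) ** 0.9
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lmbda)

for epoch in range(1, num_epochs + 1):
    # ... train_one_epoch / validate would go here ...
    optimizer.step()   # step the optimizer first, then the scheduler
    scheduler.step()   # no epoch argument: the scheduler counts epochs internally
    print(epoch, scheduler.get_last_lr())
```

Called this way, the warning disappears and the scheduler's internal epoch counter advances by one per `scheduler.step()`, so the resulting schedule matches the one the explicit epoch argument produced.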

I can see the training loop running through, but I am not sure whether the learning rate is actually being reduced by scheduler.step() according to the schedule.
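One way to sanity-check this without the full training pipeline is to compute the learning rates the schedule should produce directly from the lambda's closed form (base_lr = 0.1 and num_epochs = 10 are assumed values here) and compare them against what scheduler.get_last_lr() reports each epoch:

```python
# Expected learning-rate trajectory of the polynomial-decay lambda,
# computed from its closed form: base_lr * (1 - epoch/num_epochs)**0.9
base_lr = 0.1     # assumed initial lr
num_epochs = 10   # assumed epoch count

expected = [base_lr * (1 - epoch / num_epochs) ** 0.9
            for epoch in range(num_epochs + 1)]

# The rate should decrease monotonically from base_lr down to 0.
print([round(lr, 5) for lr in expected])
```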

Any comments on this discrepancy between the documentation and the runtime Warning?