PyTorch latest release (1.4) broke MultiStepLR: wrong LR after step(epoch) due to _get_closed_form_lr

Hi,
the setup is simple:
a plain SGD optimizer with an initial LR of 0.05, and a MultiStepLR scheduler with milestones [2, 4].

Code to reproduce:
#####################################
from torch import optim
from torchvision.models import resnet50
from torch.optim import lr_scheduler

model = resnet50()
optimizer = optim.SGD(model.parameters(), lr=0.05)
ms_scheduler = lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[2,4])
lrs = [param_group['lr'] for param_group in optimizer.param_groups]
print(f"lr before step: {lrs}")
ms_scheduler.step(epoch=0)
lrs = [param_group['lr'] for param_group in optimizer.param_groups]
print(f"lr after step(epoch=0): {lrs}")
######################################

The bug happens when epoch is not None (I noticed the PyTorch community has tried to remove the option to pass epoch to step(), but I will open a separate issue about that). When epoch is passed, step() calls self._get_closed_form_lr() and resets the optimizer's LR values. In my case the expected value at epoch 0 is 0.05, not 0.005 (it seems that all the milestone decays are applied at once!).
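For reference, the closed form MultiStepLR is supposed to compute is a decay per milestone already passed, so at epoch 0 with milestones [2, 4] no decay should apply. A minimal sketch of that formula (standalone, not the actual PyTorch source):

```python
from bisect import bisect_right

base_lr, gamma, milestones = 0.05, 0.1, [2, 4]

def closed_form_lr(epoch):
    # Decay base_lr once for every milestone <= epoch.
    # bisect_right counts how many milestones have been passed.
    return base_lr * gamma ** bisect_right(milestones, epoch)

print(closed_form_lr(0))  # 0.05 -- no milestone passed yet
print(closed_form_lr(2))  # one milestone passed: 0.05 * 0.1
print(closed_form_lr(4))  # two milestones passed: 0.05 * 0.1 ** 2
```

So returning 0.005 at epoch 0 is consistent with both milestones being applied immediately, which matches the behavior I observe.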

I see code that handles the case when epoch is None, but the non-None path is buggy.

I always pass the epoch when calling step(): I have a scheduler-sequence wrapper that I wrote in order to switch schedulers during training (the most common use cases are warm-up and tear-down LR scheduling, but not only those).
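To illustrate why I rely on the epoch argument, here is a hypothetical minimal sketch of such a wrapper (names and structure are my own, not the actual code I use): it routes step(epoch) to one of several schedulers based on epoch boundaries.

```python
from bisect import bisect_right

class SchedulerSequence:
    """Hypothetical sketch: dispatch step(epoch) to one of several
    schedulers depending on which epoch range we are in."""

    def __init__(self, schedulers, boundaries):
        # boundaries[i] is the first epoch at which schedulers[i + 1] takes over
        self.schedulers = schedulers
        self.boundaries = boundaries

    def step(self, epoch):
        # Pick the active scheduler for this epoch and forward the call.
        idx = bisect_right(self.boundaries, epoch)
        self.schedulers[idx].step(epoch)

# Demo with stand-in schedulers; any object exposing step(epoch) works,
# including torch.optim.lr_scheduler instances.
class Recorder:
    def __init__(self):
        self.calls = []
    def step(self, epoch):
        self.calls.append(epoch)

warmup, decay = Recorder(), Recorder()
seq = SchedulerSequence([warmup, decay], boundaries=[5])
for e in range(8):
    seq.step(e)
# warmup handles epochs 0-4, decay handles epochs 5-7
```

Because the wrapper needs to hand each underlying scheduler an absolute epoch, removing the epoch argument from step() would break this pattern.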