Fix momentum bug in CyclicLR

Hey, an issue like this was resolved in #20401, but I am still running into it. Any recommendations on how I can make this work with my network? Thanks.

import torch
from torch import optim

optimizer = optim.Adam(net.parameters(), lr=0.01, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.001, max_lr=0.1, cycle_momentum=True)
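In case it helps to illustrate the situation: Adam's param groups expose betas rather than a 'momentum' key, which is why CyclicLR's momentum cycling (cycle_momentum=True) does not apply to it the way it does to SGD. Below is a minimal sketch of the two usual workarounds, reusing the same net and hyperparameters from the snippet above; this is not an official fix, just what is commonly done.

import torch
from torch import optim

# Option 1: keep Adam and disable momentum cycling,
# since Adam has no 'momentum' entry in its param groups.
optimizer = optim.Adam(net.parameters(), lr=0.01, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=0.001, max_lr=0.1, cycle_momentum=False)

# Option 2: if cycling the momentum is the point, switch to SGD,
# which does expose a 'momentum' key that CyclicLR can cycle.
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=0.001, max_lr=0.1,
    base_momentum=0.8, max_momentum=0.9, cycle_momentum=True)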