Implement learning rate decay

Hi there,
I want to implement learning rate decay while using the Adam optimizer. My code is shown below:

def lr_decay(epoch_num, init_lr, decay_rate):
    '''
    Exponential decay: lr = init_lr * decay_rate ** epoch_num
    :param epoch_num: current epoch index (0-based)
    :param init_lr: initial learning rate
    :param decay_rate: if decay_rate == 1, no decay
    :return: learning rate for this epoch
    '''
    return init_lr * decay_rate ** epoch_num

and the training function is:

def fit(x, y, net, epochs, init_lr, decay_rate):
    loss_points = []
    for i in range(epochs):
        lr_1 = lr_decay(i, init_lr, decay_rate)
        # a brand-new Adam optimizer is created on every epoch
        optimizer = torch.optim.Adam(net.parameters(), lr=lr_1)
        yhat = net(x)
        loss = cross_entropy_loss(yhat, y)
        loss_points.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

However, when I set decay_rate = 1 (no decay), the output is very different from what I got before. My previous code was:

optimizer = torch.optim.Adam(net.parameters(), lr=constant)
def fit(x, y, net, optimizer, epochs):
    loss_points = []    
    for i in range(epochs):
        yhat = net(x)
        loss = cross_entropy_loss(yhat, y)
        loss_points.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

I think the only difference between the two versions is that, to implement learning rate decay, I create a new optimizer in every epoch. How can I fix my code?

Do you know about learning rate schedulers? ExponentialLR implements exactly this kind of per-epoch exponential decay: https://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.ExponentialLR

Recreating the optimizer every epoch won't work correctly with Adam. Adam is stateful: it keeps running estimates of the first and second moments of each parameter's gradient, and that state is reset to zero each time you construct a new optimizer, so the updates differ from training with a single optimizer even when decay_rate = 1.
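Here is a minimal sketch of what I mean, assuming the same net, x, y, and cross_entropy_loss from your post. The optimizer is created once, and ExponentialLR multiplies the learning rate by gamma after every scheduler.step(), which gives lr = init_lr * gamma ** epoch, matching your lr_decay when gamma = decay_rate:

import torch

def fit(x, y, net, epochs, init_lr, decay_rate):
    loss_points = []
    # create the optimizer once so Adam's moment estimates are preserved across epochs
    optimizer = torch.optim.Adam(net.parameters(), lr=init_lr)
    # ExponentialLR multiplies the learning rate by decay_rate on each scheduler.step()
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=decay_rate)
    for i in range(epochs):
        yhat = net(x)
        loss = cross_entropy_loss(yhat, y)
        loss_points.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()  # decay the learning rate once per epoch
    return loss_points

With decay_rate = 1 this reduces to your original constant-learning-rate training loop, since the scheduler leaves the learning rate unchanged.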

Hi Alex,
Thanks for your help. I implemented my code following the docs, and it works!