# Implement learning rate decay

Hi there,
I want to implement learning rate decay while using the Adam optimizer. My code is shown below:

```python
def lr_decay(epoch_num, init_lr, decay_rate):
    '''
    :param epoch_num: current epoch (0-based)
    :param init_lr: initial learning rate
    :param decay_rate: if decay_rate = 1, no decay
    :return: learning rate for this epoch
    '''
    return init_lr * decay_rate ** epoch_num
```
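As a quick sanity check of the schedule on its own (plain Python, no torch needed; the numbers are just placeholders):

```python
def lr_decay(epoch_num, init_lr, decay_rate):
    # exponential decay: init_lr * decay_rate ** epoch_num
    return init_lr * decay_rate ** epoch_num

# decay_rate = 1 should leave the learning rate untouched at every epoch
lr_constant = lr_decay(10, 0.001, 1.0)
# decay_rate = 0.9 shrinks it geometrically: 0.001 * 0.9**2
lr_decayed = lr_decay(2, 0.001, 0.9)
```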

and the training function is:

```python
def fit(x, y, net, epochs, init_lr, decay_rate):
    loss_points = []
    for i in range(epochs):
        lr_1 = lr_decay(i, init_lr, decay_rate)
        # a fresh optimizer is created every epoch with the decayed rate
        optimizer = torch.optim.Adam(net.parameters(), lr=lr_1)
        yhat = net(x)
        loss = cross_entropy_loss(yhat, y)
        loss_points.append(loss.item())
        loss.backward()
        optimizer.step()
    return loss_points
```
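For reference, my understanding is that the same exponential schedule can also be expressed with PyTorch's built-in `torch.optim.lr_scheduler.ExponentialLR`, which updates the learning rate inside the existing optimizer rather than constructing a new one each epoch (the model and the numbers below are just placeholders for illustration):

```python
import torch

net = torch.nn.Linear(4, 2)  # stand-in model, only for illustration
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
# gamma plays the role of decay_rate: lr = init_lr * gamma ** epoch
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(3):
    # ... forward pass, loss.backward(), optimizer.step() would go here ...
    scheduler.step()  # decays the lr held inside the existing optimizer
```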

However, when I set decay_rate = 1 (no decay), the output is very different from what I got before. My previous code was:

```python
optimizer = torch.optim.Adam(net.parameters(), lr=constant)

def fit(x, y, net, optimizer, epochs):
    loss_points = []
    for i in range(epochs):
        yhat = net(x)
        loss = cross_entropy_loss(yhat, y)
        loss_points.append(loss.item())