Learning rate scheduler

Hi everyone,

I tried to use a learning rate scheduler, but I ran into the error below:

NameError: name 'StepLR' is not defined

Does anyone know how I can fix it? Here is my training code:
# training loop
loss = 0
epoch_num = 0
error = []
scheduler = StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(num_epochs):
    for spec in train_loader:
        img = spec.cuda()

        output = model(img)
        loss = criterion(output, img)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        error.append(loss.item())
        scheduler.step()

    if epoch % 10 == 9:
        epoch_num += epoch_num
        plt.plot(error)
        print('\r Train Epoch : {}/{} \tLoss : {:.4f}'.format(epoch + 1, num_epochs, loss / 32))

model_save_name = 'Autoencoder for Li project 11 feb(2) (mid power) febTanh activation function Feb 2020 DMSO dataset'
path = f"/content/drive/My Drive/{model_save_name}"
torch.save(model.state_dict(), path)

Thank you

Which PyTorch version are you using?
If you are not using the latest stable release, could you please update?


Hi Patrick,

I am using the latest version of PyTorch.

How did you import it?

from torch.optim.lr_scheduler import StepLR

works for me.
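
In case the full context helps, here is a minimal runnable sketch of the import and construction; the optimizer and the step_size/gamma values are just placeholders, not taken from your setup:

import torch
from torch.optim.lr_scheduler import StepLR

# a dummy parameter just so an optimizer can be constructed
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=1e-4)

# halves the learning rate every 20 calls to scheduler.step()
scheduler = StepLR(optimizer, step_size=20, gamma=0.5)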


WOW

Maybe I put scheduler.step() in the wrong spot.
Should it be in the epoch loop or in the train_loader loop?

scheduler.step() should be right after optimizer.step() and both should be in the training loop.

for epoch in range(N_EPOCHS):
    for batch_i, (X, y) in enumerate(train_dataloader):
        # ... forward pass, compute loss, loss.backward() ...
        optimizer.step()    # update the parameters first
        scheduler.step()    # then let the scheduler adjust the lr
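
One thing to keep in mind: StepLR counts calls to scheduler.step(), so with this placement step_size is measured in batches rather than epochs. If the decay should happen every step_size epochs, the scheduler can instead be stepped once per epoch. A runnable toy sketch (dummy model and made-up sizes, not your actual setup):

import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = StepLR(optimizer, step_size=30, gamma=0.5)

for epoch in range(60):
    for _ in range(10):  # stands in for iterating over the train loader
        optimizer.zero_grad()
        loss = model(torch.randn(8, 4)).pow(2).mean()
        loss.backward()
        optimizer.step()
    scheduler.step()  # once per epoch: the lr halves every 30 epochs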

I did this as well; however, when I print my lr in every epoch I can see that it does not change. :expressionless:

from torch.optim.lr_scheduler import StepLR

learning_rate = 0.0001
num_epochs = 200

model = Autoencoder().cuda()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate,
                             weight_decay=1e-5)

# training loop
loss = 0
epoch_num = 0
error = []
scheduler = StepLR(optimizer, step_size=30, gamma=0.5)

for epoch in range(num_epochs):
    for spec in train_loader:
        img = spec.cuda()

        output = model(img)
        loss = criterion(output, img)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()

        error.append(loss.item())

    if epoch % 10 == 9:
        epoch_num += epoch_num

    print('\r Train Epoch : {}/{} \tLoss : {:.4f}'.format(epoch + 1, num_epochs, loss / 50))
    print(learning_rate)

model_save_name = 'Autoencoder for Artificial dataset (7) 3 class of data 12 feb Avgpool Tanh activation function'
path = f"/content/drive/My Drive/{model_save_name}"
torch.save(model.state_dict(), path)

plt.plot(error)
plt.xlabel('Number of iteration')
plt.ylabel('Loss (MSE)')
plt.title('Loss function vs # of iterations for Artificial dataset')

Even worse than that, training is not working; the loss stays constant.

Thanks
however when I print lr in every epoch I can see that it does not change.

You are currently printing the learning_rate variable, which is set to 0.0001.
Use print(scheduler.get_last_lr()) instead to get the learning rate that will actually be applied.
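
A tiny runnable check (dummy optimizer and made-up numbers, not your model) showing the difference:

import torch
from torch.optim.lr_scheduler import StepLR

learning_rate = 0.0001
optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=learning_rate)
scheduler = StepLR(optimizer, step_size=30, gamma=0.5)

for epoch in range(60):
    optimizer.step()
    scheduler.step()
    if epoch % 10 == 9:
        print(learning_rate)            # always prints 0.0001
        print(scheduler.get_last_lr())  # e.g. [5e-05] once 30 steps have passed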