Hi. I created a network and I have to train it. I used this code to set the optimizer:
```python
optimizer = optim.Adam(model.parameters(), lr=1e-2)
```
and this one for the training:
```python
epochs = 20
for epoch in range(epochs):
    loss = 0
    if __name__ == '__main__':
        for (batch_features, _) in train_loader:
            batch_features = batch_features.view(-1, 10694).to(device)
            optimizer.zero_grad()
            outputs = model(batch_features)
            train_loss = criterion(outputs[0], batch_features)
            train_loss.backward()
            optimizer.step()
            loss += train_loss.item()
    loss = loss / len(train_loader)
    print("epoch : {}/{}, loss = {:.6f}".format(epoch + 1, epochs, loss))
```
Is there a way to decrease the learning rate (for example lr = lr/1.1) every 3 epochs?
You only need to initialize the scheduler once, so you are correct to leave those first calls out of the loop. Within the outer loop, you can call scheduler.step() every three epochs using a conditional like if epoch % 3 == 0.
Actually, you don't need the `if epoch % 3 == 0` check for StepLRScheduler, as it does that for you automatically: you already specified a decay interval of three epochs when you initialized the scheduler, so just call scheduler.step() once per epoch.
Also, I’m not sure which documentation you are reading, but I believe the class you’re looking for is StepLR from torch.optim.lr_scheduler, rather than StepLRScheduler. Its arguments are called step_size and gamma rather than decay_t and decay_rate, and you need not pass epoch into scheduler.step().
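Putting the corrections together, here is a minimal runnable sketch of the StepLR approach. The tiny `nn.Linear` model is a stand-in for the original network, and the batch loop is elided so the example focuses on the scheduler; `gamma=1/1.1` gives the `lr = lr / 1.1` behavior you asked for:

```python
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR

# Hypothetical stand-in for the real model.
model = nn.Linear(8, 8)
optimizer = optim.Adam(model.parameters(), lr=1e-2)

# Divide the learning rate by 1.1 every 3 epochs.
scheduler = StepLR(optimizer, step_size=3, gamma=1 / 1.1)

for epoch in range(9):
    # ... inner loop over train_loader, loss.backward(), etc. goes here ...
    optimizer.step()    # step the optimizer first,
    scheduler.step()    # then advance the schedule once per epoch
    print("epoch {}: lr = {:.6f}".format(
        epoch + 1, optimizer.param_groups[0]['lr']))
```

After 9 epochs the learning rate has been decayed three times (at epochs 3, 6, and 9), i.e. it equals `1e-2 * (1/1.1)**3`. Note that since PyTorch 1.1 the expected order is `optimizer.step()` before `scheduler.step()`, as above.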