Hello, I’m going to train a model with an SGD optimizer, and I want to divide the learning rate by a factor of 10 when the iteration number reaches a specific value. How should I do that?

Hi, please have a look at the PyTorch learning rate schedulers to select the one that fits your use case.
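For example, `MultiStepLR` multiplies the learning rate by `gamma` whenever its internal step counter reaches one of the given milestones. The scheduler simply counts calls to `scheduler.step()`, so if you call it once per iteration rather than once per epoch, the milestones are effectively iteration numbers. A minimal sketch (the model and the milestone value here are placeholders):

```
import torch
import torch.optim as optim

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Divide the lr by 10 (gamma=0.1) once the step counter reaches 5000.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[5000], gamma=0.1)

for iteration in range(10000):
    # ... forward pass, loss.backward() ...
    optimizer.step()
    scheduler.step()  # stepping per iteration makes milestones iteration counts
```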

Alternatively, you can change the learning rate manually:

```
if epoch == specific_epoch:
    for group in optim.param_groups:
        group['lr'] /= 10
```
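Note that nothing in this snippet is tied to epochs: `optim.param_groups` is just a list of dicts holding the hyperparameters, so you can run the same check inside your batch loop against an iteration counter instead.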

Also, sharing a minimal reproducible example does help people when debugging problems!

Hello, I saw them, but they describe decreasing the learning rate at specific epochs, not at a specific iteration.

Hello, thanks for your attention.

I wrote similar code for a specific iteration, but it didn’t work, so I created this topic.

Are you sure this will work for decreasing the learning rate at a specific iteration?

Can you share a minimal reproducible example of your problem?

Hello, this is my training script, and I want to change the learning rate during an epoch.

```
import collections

import numpy as np
import torch
import torch.optim as optim

optimizer = optim.SGD(retinanet.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0001)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=1, verbose=True)

loss_hist = collections.deque(maxlen=500)  # running window of recent losses

retinanet.train()
retinanet.module.freeze_bn()

print('Num training images: {}'.format(len(dataset_train)))

epochs = 100
for epoch_num in range(epochs):
    retinanet.train()
    retinanet.module.freeze_bn()
    epoch_loss = []

    for iter_num, data in enumerate(dataloader_train):
        try:
            optimizer.zero_grad()

            if torch.cuda.is_available():
                classification_loss, regression_loss = retinanet([data['img'].cuda().float(), data['annot']])
            else:
                classification_loss, regression_loss = retinanet([data['img'].float(), data['annot']])

            classification_loss = classification_loss.mean()
            regression_loss = regression_loss.mean()
            loss = classification_loss + regression_loss

            # Skip batches that produce no loss.
            if bool(loss == 0):
                continue

            loss.backward()
            torch.nn.utils.clip_grad_norm_(retinanet.parameters(), 0.1)
            optimizer.step()

            loss_hist.append(float(loss))
            epoch_loss.append(float(loss))

            print(
                'Epoch: {} | Iteration: {} | Classification loss: {:1.5f} | Regression loss: {:1.5f} | Running loss: {:1.5f}'.format(
                    epoch_num, iter_num, float(classification_loss), float(regression_loss), np.mean(loss_hist)))

            del classification_loss
            del regression_loss
        except Exception as e:
            print(e)
            continue

    # Reduce the lr when the mean epoch loss plateaus.
    scheduler.step(np.mean(epoch_loss))
```
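To change the learning rate at a specific iteration inside this loop, one option is to keep a global step counter and modify `optimizer.param_groups` when it reaches the target. A minimal sketch of just the inner loop (`target_iter` is a placeholder value); since `ReduceLROnPlateau` reads the current learning rate from `param_groups` when it triggers, the manual change should compose with your existing scheduler:

```
target_iter = 5000  # placeholder: iteration at which to divide the lr by 10
global_step = 0

for epoch_num in range(epochs):
    for iter_num, data in enumerate(dataloader_train):
        # ... forward, backward, optimizer.step() as above ...
        global_step += 1
        if global_step == target_iter:
            for group in optimizer.param_groups:
                group['lr'] /= 10
            print('Dropped lr to {}'.format(optimizer.param_groups[0]['lr']))
```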