How to adjust the learning rate per batch step rather than per epoch?

The tutorial shows how to decay the learning rate once per epoch, but how can I adjust the learning rate per batch?


You could just call scheduler.step() inside your “batch loop”:

import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# dummy dataset and model, just for demonstration
dataset = datasets.FakeData(size=200, transform=transforms.ToTensor())
loader = DataLoader(
    dataset,
    batch_size=10,
    shuffle=False,
    num_workers=0
)

model = nn.Linear(3*224*224, 10)
optimizer = optim.SGD(model.parameters(), lr=1.)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
criterion = nn.NLLLoss()

for epoch in range(10):
    for batch_idx, (data, target) in enumerate(loader):
        print('Epoch {}, Batch idx {}, lr {}'.format(
            epoch, batch_idx, optimizer.param_groups[0]['lr']))
        
        optimizer.zero_grad()
        output = model(data.view(10, -1))
        loss = criterion(output, target.long())
        loss.backward()
        optimizer.step()
        
        # advance the LR schedule once per batch instead of once per epoch
        scheduler.step()

The scheduler itself does not contain any logic about epochs; it just uses .step() calls to manipulate the learning rate of the optimizer.
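Since the scheduler only counts .step() calls, moving the call into the batch loop means step_size is measured in batches rather than epochs. Here is a minimal sketch of that idea (the batches_per_epoch and decay_every_epochs names are just illustrative assumptions): if you still want the decay to happen every few epochs, scale step_size by the number of batches per epoch.

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=1.)

batches_per_epoch = 20       # e.g. len(loader) in the example above
decay_every_epochs = 5       # decay the lr once every 5 epochs

# step_size is now expressed in batches, since step() will be called per batch
scheduler = optim.lr_scheduler.StepLR(
    optimizer,
    step_size=decay_every_epochs * batches_per_epoch,
    gamma=0.1
)

for step in range(3 * decay_every_epochs * batches_per_epoch):
    optimizer.step()         # update the parameters first
    scheduler.step()         # then advance the schedule by one batch

# the lr has been decayed three times: 1.0 -> 0.1 -> 0.01 -> 0.001
print(optimizer.param_groups[0]['lr'])

The same pattern works for other schedulers as well; only the meaning of their internal counter changes from epochs to batches.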


Got it, very helpful!
