How to use Cosine Annealing?

I am still new to PyTorch, and I am going off this link: https://pytorch.org/docs/stable/optim.html. I don't see many examples of CosineAnnealingLR being applied online, so this is how I thought it should look:

 import math

 # T_max = batches per epoch, so one cosine half-cycle
 # spans a single epoch when the scheduler is stepped per batch
 Q = math.floor(len(train_data) / batch)

 lrs = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=Q)
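
To check my understanding of what that does, I ran a minimal, self-contained sketch (the Linear model and Q = 5 are just toy stand-ins, not my real setup):

 import torch

 model = torch.nn.Linear(10, 2)      # toy stand-in for the real network
 optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
 Q = 5                               # pretend there are 5 batches per epoch
 lrs = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=Q)

 for step in range(2 * Q):           # two "epochs" of per-batch stepping
     optimizer.step()                # normally preceded by loss.backward()
     lrs.step()
     print(step, lrs.get_last_lr())  # LR falls to eta_min (0 by default) after
                                     # Q steps, then climbs back: the schedule
                                     # is periodic with period 2 * T_max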

Then in my training loop, I have it set up like so:

 # Update parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        lrs.step()   # step the scheduler once per batch
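
If I instead wanted the annealing to span the whole run rather than restart each epoch, my understanding is that the usual pattern is to step once per epoch with T_max set to the number of epochs (epochs, train_loader, model, and criterion below are placeholders for my own objects):

 lrs = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

 for epoch in range(epochs):
     for X, y in train_loader:
         optimizer.zero_grad()
         loss = criterion(model(X), y)
         loss.backward()
         optimizer.step()
     lrs.step()   # one scheduler step per epoch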

I also tried a different approach in the training loop:

 # Update parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Print interim results
        if b % 10 == 0:
            print(f'epoch: {epoch:2}  batch: {b:4} loss: {loss.item():10.8f}  \
accuracy: {accuracy:3.3f}%')

        # 214 batches per epoch, so this block runs once per epoch
        if b % 214 == 0:
            train_acc.append(accuracy)
            train_losses.append(loss.item())  # store a float, not the graph-attached tensor
            train_correct.append(trn_corr)
            mean_loss = sum(train_losses) / len(train_losses)

            # Track the current learning rate
            for param_group in optimizer.param_groups:
                lr_track.append(param_group['lr'])
            print("\n")

            # CosineAnnealingLR.step() takes no metric argument; passing
            # mean_loss would be misread as an epoch index (ReduceLROnPlateau
            # is the scheduler whose step() accepts a loss value)
            lrs.step()

Here the scheduler is stepped once per epoch, so the learning rate only gets adjusted at epoch boundaries. Is this the correct way to use it?
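
One thing I noticed: passing mean_loss to step() is really the ReduceLROnPlateau pattern. If adjusting the LR based on the loss is the goal, my understanding is that it is the scheduler whose step() accepts a metric (the factor and patience values here are just illustrative):

 lrs = torch.optim.lr_scheduler.ReduceLROnPlateau(
     optimizer, mode='min', factor=0.1, patience=5)

 # ... then, once per epoch:
 lrs.step(mean_loss)   # LR is multiplied by factor when mean_loss plateaus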

An example of implementing Cosine Annealing + warm restarts can be found here:

How to implement torch.optim.lr_scheduler.CosineAnnealingLR?
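
In short, the warm-restarts variant looks something like this (T_0 and T_mult are illustrative values; epochs, train_loader, model, and criterion stand in for your own objects):

 lrs = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
     optimizer, T_0=10, T_mult=2)  # restart after 10 steps, then 20, 40, ...

 for epoch in range(epochs):
     for X, y in train_loader:
         optimizer.zero_grad()
         loss = criterion(model(X), y)
         loss.backward()
         optimizer.step()
         lrs.step()  # LR snaps back to its initial value at each restart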

Hope that helps!
